00:00:00.001 Started by upstream project "autotest-per-patch" build number 121282 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.134 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.135 The recommended git tool is: git 00:00:00.135 using credential 00000000-0000-0000-0000-000000000002 00:00:00.137 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.166 Fetching changes from the remote Git repository 00:00:00.168 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.203 Using shallow fetch with depth 1 00:00:00.203 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.203 > git --version # timeout=10 00:00:00.221 > git --version # 'git version 2.39.2' 00:00:00.221 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.221 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.221 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.698 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.712 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.726 Checking out Revision f964f6d3463483adf05cc5c086f2abd292e05f1d (FETCH_HEAD) 00:00:05.726 > git config core.sparsecheckout # timeout=10 00:00:05.738 > git read-tree -mu HEAD # timeout=10 00:00:05.755 > git checkout -f f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=5 00:00:05.776 Commit message: "ansible/roles/custom_facts: Drop nvme features" 00:00:05.777 > git rev-list --no-walk f964f6d3463483adf05cc5c086f2abd292e05f1d # timeout=10 00:00:05.886 [Pipeline] Start of Pipeline 00:00:05.902 [Pipeline] library 00:00:05.903 Loading library shm_lib@master 00:00:05.903 Library shm_lib@master is cached. Copying from home. 00:00:05.938 [Pipeline] node 00:00:20.996 Still waiting to schedule task 00:00:20.996 Waiting for next available executor on ‘vagrant-vm-host’ 00:13:04.584 Running on VM-host-SM0 in /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:13:04.586 [Pipeline] { 00:13:04.599 [Pipeline] catchError 00:13:04.601 [Pipeline] { 00:13:04.616 [Pipeline] wrap 00:13:04.627 [Pipeline] { 00:13:04.635 [Pipeline] stage 00:13:04.638 [Pipeline] { (Prologue) 00:13:04.661 [Pipeline] echo 00:13:04.662 Node: VM-host-SM0 00:13:04.666 [Pipeline] cleanWs 00:13:04.673 [WS-CLEANUP] Deleting project workspace... 00:13:04.673 [WS-CLEANUP] Deferred wipeout is used... 
00:13:04.679 [WS-CLEANUP] done 00:13:04.852 [Pipeline] setCustomBuildProperty 00:13:04.926 [Pipeline] nodesByLabel 00:13:04.928 Found a total of 1 nodes with the 'sorcerer' label 00:13:04.939 [Pipeline] httpRequest 00:13:04.943 HttpMethod: GET 00:13:04.943 URL: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:13:04.948 Sending request to url: http://10.211.164.96/packages/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:13:04.950 Response Code: HTTP/1.1 200 OK 00:13:04.950 Success: Status code 200 is in the accepted range: 200,404 00:13:04.950 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:13:05.753 [Pipeline] sh 00:13:06.034 + tar --no-same-owner -xf jbp_f964f6d3463483adf05cc5c086f2abd292e05f1d.tar.gz 00:13:06.051 [Pipeline] httpRequest 00:13:06.056 HttpMethod: GET 00:13:06.056 URL: http://10.211.164.96/packages/spdk_2971e8ff3460ce72dc9fda494c3758eac7ec402d.tar.gz 00:13:06.056 Sending request to url: http://10.211.164.96/packages/spdk_2971e8ff3460ce72dc9fda494c3758eac7ec402d.tar.gz 00:13:06.068 Response Code: HTTP/1.1 200 OK 00:13:06.069 Success: Status code 200 is in the accepted range: 200,404 00:13:06.069 Saving response body to /var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk_2971e8ff3460ce72dc9fda494c3758eac7ec402d.tar.gz 00:13:18.307 [Pipeline] sh 00:13:18.594 + tar --no-same-owner -xf spdk_2971e8ff3460ce72dc9fda494c3758eac7ec402d.tar.gz 00:13:21.890 [Pipeline] sh 00:13:22.169 + git -C spdk log --oneline -n5 00:13:22.169 2971e8ff3 bdev: shorten trace name to "tid" 00:13:22.169 e9041dfc8 bdev: use "size" argument for num_blocks 00:13:22.169 23a6a33ce bdev: use local variable when tallying io histogram 00:13:22.169 d083919a9 bdev: do not try to track ioch elapsed time in trace 00:13:22.169 655ef2939 bdev: register and use trace owners 00:13:22.189 [Pipeline] writeFile 00:13:22.206 [Pipeline] sh 00:13:22.486 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:13:22.499 [Pipeline] sh 00:13:22.784 + cat autorun-spdk.conf 00:13:22.785 SPDK_RUN_FUNCTIONAL_TEST=1 00:13:22.785 SPDK_TEST_NVMF=1 00:13:22.785 SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:22.785 SPDK_TEST_USDT=1 00:13:22.785 SPDK_TEST_NVMF_MDNS=1 00:13:22.785 SPDK_RUN_UBSAN=1 00:13:22.785 NET_TYPE=virt 00:13:22.785 SPDK_JSONRPC_GO_CLIENT=1 00:13:22.785 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:22.793 RUN_NIGHTLY=0 00:13:22.796 [Pipeline] } 00:13:22.807 [Pipeline] // stage 00:13:22.817 [Pipeline] stage 00:13:22.819 [Pipeline] { (Run VM) 00:13:22.829 [Pipeline] sh 00:13:23.102 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:13:23.103 + echo 'Start stage prepare_nvme.sh' 00:13:23.103 Start stage prepare_nvme.sh 00:13:23.103 + [[ -n 3 ]] 00:13:23.103 + disk_prefix=ex3 00:13:23.103 + [[ -n /var/jenkins/workspace/nvmf-tcp-vg-autotest ]] 00:13:23.103 + [[ -e /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf ]] 00:13:23.103 + source /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf 00:13:23.103 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:13:23.103 ++ SPDK_TEST_NVMF=1 00:13:23.103 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:13:23.103 ++ SPDK_TEST_USDT=1 00:13:23.103 ++ SPDK_TEST_NVMF_MDNS=1 00:13:23.103 ++ SPDK_RUN_UBSAN=1 00:13:23.103 ++ NET_TYPE=virt 00:13:23.103 ++ SPDK_JSONRPC_GO_CLIENT=1 00:13:23.103 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:13:23.103 ++ RUN_NIGHTLY=0 00:13:23.103 + cd /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:13:23.103 + nvme_files=() 00:13:23.103 + declare -A 
nvme_files 00:13:23.103 + backend_dir=/var/lib/libvirt/images/backends 00:13:23.103 + nvme_files['nvme.img']=5G 00:13:23.103 + nvme_files['nvme-cmb.img']=5G 00:13:23.103 + nvme_files['nvme-multi0.img']=4G 00:13:23.103 + nvme_files['nvme-multi1.img']=4G 00:13:23.103 + nvme_files['nvme-multi2.img']=4G 00:13:23.103 + nvme_files['nvme-openstack.img']=8G 00:13:23.103 + nvme_files['nvme-zns.img']=5G 00:13:23.103 + (( SPDK_TEST_NVME_PMR == 1 )) 00:13:23.103 + (( SPDK_TEST_FTL == 1 )) 00:13:23.103 + (( SPDK_TEST_NVME_FDP == 1 )) 00:13:23.103 + [[ ! -d /var/lib/libvirt/images/backends ]] 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi2.img -s 4G 00:13:23.103 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-cmb.img -s 5G 00:13:23.103 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-openstack.img -s 8G 00:13:23.103 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-zns.img -s 5G 00:13:23.103 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi1.img -s 4G 00:13:23.103 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme-multi0.img -s 4G 00:13:23.103 Formatting '/var/lib/libvirt/images/backends/ex3-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:13:23.103 + for nvme in "${!nvme_files[@]}" 00:13:23.103 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex3-nvme.img -s 5G 00:13:23.360 Formatting '/var/lib/libvirt/images/backends/ex3-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:13:23.360 ++ sudo grep -rl ex3-nvme.img /etc/libvirt/qemu 00:13:23.360 + echo 'End stage prepare_nvme.sh' 00:13:23.360 End stage prepare_nvme.sh 00:13:23.371 [Pipeline] sh 00:13:23.650 + DISTRO=fedora38 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:13:23.650 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex3-nvme.img -b /var/lib/libvirt/images/backends/ex3-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img -H -a -v -f fedora38 00:13:23.650 00:13:23.650 DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant 00:13:23.650 SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk 00:13:23.650 
VAGRANT_TARGET=/var/jenkins/workspace/nvmf-tcp-vg-autotest 00:13:23.650 HELP=0 00:13:23.650 DRY_RUN=0 00:13:23.650 NVME_FILE=/var/lib/libvirt/images/backends/ex3-nvme.img,/var/lib/libvirt/images/backends/ex3-nvme-multi0.img, 00:13:23.650 NVME_DISKS_TYPE=nvme,nvme, 00:13:23.650 NVME_AUTO_CREATE=0 00:13:23.650 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex3-nvme-multi1.img:/var/lib/libvirt/images/backends/ex3-nvme-multi2.img, 00:13:23.650 NVME_CMB=,, 00:13:23.650 NVME_PMR=,, 00:13:23.650 NVME_ZNS=,, 00:13:23.650 NVME_MS=,, 00:13:23.650 NVME_FDP=,, 00:13:23.650 SPDK_VAGRANT_DISTRO=fedora38 00:13:23.650 SPDK_VAGRANT_VMCPU=10 00:13:23.650 SPDK_VAGRANT_VMRAM=12288 00:13:23.650 SPDK_VAGRANT_PROVIDER=libvirt 00:13:23.650 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:13:23.650 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:13:23.650 SPDK_OPENSTACK_NETWORK=0 00:13:23.650 VAGRANT_PACKAGE_BOX=0 00:13:23.650 VAGRANTFILE=/var/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:13:23.650 FORCE_DISTRO=true 00:13:23.650 VAGRANT_BOX_VERSION= 00:13:23.650 EXTRA_VAGRANTFILES= 00:13:23.650 NIC_MODEL=e1000 00:13:23.650 00:13:23.650 mkdir: created directory '/var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt' 00:13:23.650 /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt /var/jenkins/workspace/nvmf-tcp-vg-autotest 00:13:26.933 Bringing machine 'default' up with 'libvirt' provider... 00:13:28.310 ==> default: Creating image (snapshot of base box volume). 00:13:28.310 ==> default: Creating domain with the following settings... 00:13:28.310 ==> default: -- Name: fedora38-38-1.6-1705279005-2131_default_1714145517_895c2162b2dfe6362a31 00:13:28.310 ==> default: -- Domain type: kvm 00:13:28.310 ==> default: -- Cpus: 10 00:13:28.310 ==> default: -- Feature: acpi 00:13:28.310 ==> default: -- Feature: apic 00:13:28.310 ==> default: -- Feature: pae 00:13:28.310 ==> default: -- Memory: 12288M 00:13:28.310 ==> default: -- Memory Backing: hugepages: 00:13:28.310 ==> default: -- Management MAC: 00:13:28.310 ==> default: -- Loader: 00:13:28.310 ==> default: -- Nvram: 00:13:28.310 ==> default: -- Base box: spdk/fedora38 00:13:28.310 ==> default: -- Storage pool: default 00:13:28.310 ==> default: -- Image: /var/lib/libvirt/images/fedora38-38-1.6-1705279005-2131_default_1714145517_895c2162b2dfe6362a31.img (20G) 00:13:28.310 ==> default: -- Volume Cache: default 00:13:28.310 ==> default: -- Kernel: 00:13:28.310 ==> default: -- Initrd: 00:13:28.310 ==> default: -- Graphics Type: vnc 00:13:28.310 ==> default: -- Graphics Port: -1 00:13:28.310 ==> default: -- Graphics IP: 127.0.0.1 00:13:28.310 ==> default: -- Graphics Password: Not defined 00:13:28.310 ==> default: -- Video Type: cirrus 00:13:28.310 ==> default: -- Video VRAM: 9216 00:13:28.310 ==> default: -- Sound Type: 00:13:28.310 ==> default: -- Keymap: en-us 00:13:28.310 ==> default: -- TPM Path: 00:13:28.310 ==> default: -- INPUT: type=mouse, bus=ps2 00:13:28.310 ==> default: -- Command line args: 00:13:28.310 ==> default: -> value=-device, 00:13:28.310 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:13:28.310 ==> default: -> value=-drive, 00:13:28.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme.img,if=none,id=nvme-0-drive0, 00:13:28.310 ==> default: -> value=-device, 00:13:28.310 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:13:28.310 ==> 
default: -> value=-device, 00:13:28.310 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:13:28.310 ==> default: -> value=-drive, 00:13:28.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:13:28.310 ==> default: -> value=-device, 00:13:28.310 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:13:28.310 ==> default: -> value=-drive, 00:13:28.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:13:28.310 ==> default: -> value=-device, 00:13:28.310 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:13:28.310 ==> default: -> value=-drive, 00:13:28.310 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex3-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:13:28.310 ==> default: -> value=-device, 00:13:28.310 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:13:28.569 ==> default: Creating shared folders metadata... 00:13:28.569 ==> default: Starting domain. 00:13:30.472 ==> default: Waiting for domain to get an IP address... 00:13:48.550 ==> default: Waiting for SSH to become available... 00:13:49.927 ==> default: Configuring and enabling network interfaces... 00:13:54.116 default: SSH address: 192.168.121.124:22 00:13:54.116 default: SSH username: vagrant 00:13:54.116 default: SSH auth method: private key 00:13:56.645 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:14:04.829 ==> default: Mounting SSHFS shared folder... 00:14:05.812 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output => /home/vagrant/spdk_repo/output 00:14:05.812 ==> default: Checking Mount.. 00:14:06.747 ==> default: Folder Successfully Mounted! 00:14:06.747 ==> default: Running provisioner: file... 00:14:07.682 default: ~/.gitconfig => .gitconfig 00:14:07.941 00:14:07.941 SUCCESS! 00:14:07.941 00:14:07.941 cd to /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt and type "vagrant ssh" to use. 00:14:07.941 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:14:07.941 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt" to destroy all trace of vm. 
00:14:07.941 00:14:07.951 [Pipeline] } 00:14:07.968 [Pipeline] // stage 00:14:07.975 [Pipeline] dir 00:14:07.976 Running in /var/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt 00:14:07.977 [Pipeline] { 00:14:07.986 [Pipeline] catchError 00:14:07.987 [Pipeline] { 00:14:07.999 [Pipeline] sh 00:14:08.277 + vagrant ssh-config --host vagrant 00:14:08.277 + sed -ne /^Host/,$p 00:14:08.277 + tee ssh_conf 00:14:11.560 Host vagrant 00:14:11.560 HostName 192.168.121.124 00:14:11.560 User vagrant 00:14:11.560 Port 22 00:14:11.560 UserKnownHostsFile /dev/null 00:14:11.560 StrictHostKeyChecking no 00:14:11.560 PasswordAuthentication no 00:14:11.560 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora38/38-1.6-1705279005-2131/libvirt/fedora38 00:14:11.560 IdentitiesOnly yes 00:14:11.560 LogLevel FATAL 00:14:11.560 ForwardAgent yes 00:14:11.560 ForwardX11 yes 00:14:11.560 00:14:11.577 [Pipeline] withEnv 00:14:11.579 [Pipeline] { 00:14:11.595 [Pipeline] sh 00:14:11.902 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:14:11.902 source /etc/os-release 00:14:11.902 [[ -e /image.version ]] && img=$(< /image.version) 00:14:11.902 # Minimal, systemd-like check. 00:14:11.902 if [[ -e /.dockerenv ]]; then 00:14:11.902 # Clear garbage from the node's name: 00:14:11.902 # agt-er_autotest_547-896 -> autotest_547-896 00:14:11.902 # $HOSTNAME is the actual container id 00:14:11.902 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:14:11.902 if mountpoint -q /etc/hostname; then 00:14:11.902 # We can assume this is a mount from a host where container is running, 00:14:11.902 # so fetch its hostname to easily identify the target swarm worker. 00:14:11.902 container="$(< /etc/hostname) ($agent)" 00:14:11.902 else 00:14:11.902 # Fallback 00:14:11.902 container=$agent 00:14:11.902 fi 00:14:11.902 fi 00:14:11.902 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:14:11.902 00:14:11.912 [Pipeline] } 00:14:11.931 [Pipeline] // withEnv 00:14:11.939 [Pipeline] setCustomBuildProperty 00:14:11.955 [Pipeline] stage 00:14:11.958 [Pipeline] { (Tests) 00:14:11.977 [Pipeline] sh 00:14:12.256 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:14:12.528 [Pipeline] timeout 00:14:12.529 Timeout set to expire in 40 min 00:14:12.531 [Pipeline] { 00:14:12.548 [Pipeline] sh 00:14:12.829 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:14:13.397 HEAD is now at 2971e8ff3 bdev: shorten trace name to "tid" 00:14:13.410 [Pipeline] sh 00:14:13.689 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:14:13.959 [Pipeline] sh 00:14:14.237 + scp -F ssh_conf -r /var/jenkins/workspace/nvmf-tcp-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:14:14.529 [Pipeline] sh 00:14:14.805 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant ./autoruner.sh spdk_repo 00:14:15.064 ++ readlink -f spdk_repo 00:14:15.064 + DIR_ROOT=/home/vagrant/spdk_repo 00:14:15.064 + [[ -n /home/vagrant/spdk_repo ]] 00:14:15.064 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:14:15.064 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:14:15.064 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:14:15.064 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:14:15.064 + [[ -d /home/vagrant/spdk_repo/output ]] 00:14:15.064 + cd /home/vagrant/spdk_repo 00:14:15.064 + source /etc/os-release 00:14:15.064 ++ NAME='Fedora Linux' 00:14:15.064 ++ VERSION='38 (Cloud Edition)' 00:14:15.064 ++ ID=fedora 00:14:15.064 ++ VERSION_ID=38 00:14:15.064 ++ VERSION_CODENAME= 00:14:15.064 ++ PLATFORM_ID=platform:f38 00:14:15.064 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:14:15.064 ++ ANSI_COLOR='0;38;2;60;110;180' 00:14:15.064 ++ LOGO=fedora-logo-icon 00:14:15.064 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:14:15.064 ++ HOME_URL=https://fedoraproject.org/ 00:14:15.064 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:14:15.064 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:14:15.064 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:14:15.064 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:14:15.064 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:14:15.064 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:14:15.064 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:14:15.064 ++ SUPPORT_END=2024-05-14 00:14:15.064 ++ VARIANT='Cloud Edition' 00:14:15.064 ++ VARIANT_ID=cloud 00:14:15.064 + uname -a 00:14:15.064 Linux fedora38-cloud-1705279005-2131 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:14:15.064 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:14:15.322 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:14:15.322 Hugepages 00:14:15.322 node hugesize free / total 00:14:15.322 node0 1048576kB 0 / 0 00:14:15.581 node0 2048kB 0 / 0 00:14:15.581 00:14:15.581 Type BDF Vendor Device NUMA Driver Device Block devices 00:14:15.581 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:14:15.581 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:14:15.581 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:14:15.581 + rm -f /tmp/spdk-ld-path 00:14:15.581 + source autorun-spdk.conf 00:14:15.581 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:14:15.581 ++ SPDK_TEST_NVMF=1 00:14:15.581 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:14:15.581 ++ SPDK_TEST_USDT=1 00:14:15.581 ++ SPDK_TEST_NVMF_MDNS=1 00:14:15.581 ++ SPDK_RUN_UBSAN=1 00:14:15.581 ++ NET_TYPE=virt 00:14:15.581 ++ SPDK_JSONRPC_GO_CLIENT=1 00:14:15.581 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:15.581 ++ RUN_NIGHTLY=0 00:14:15.581 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:14:15.581 + [[ -n '' ]] 00:14:15.581 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:14:15.581 + for M in /var/spdk/build-*-manifest.txt 00:14:15.581 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:14:15.581 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:14:15.581 + for M in /var/spdk/build-*-manifest.txt 00:14:15.581 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:14:15.581 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:14:15.581 ++ uname 00:14:15.581 + [[ Linux == \L\i\n\u\x ]] 00:14:15.581 + sudo dmesg -T 00:14:15.581 + sudo dmesg --clear 00:14:15.841 + dmesg_pid=5163 00:14:15.841 + sudo dmesg -Tw 00:14:15.841 + [[ Fedora Linux == FreeBSD ]] 00:14:15.841 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:15.841 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:14:15.841 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:14:15.841 + [[ -x /usr/src/fio-static/fio ]] 00:14:15.841 + export FIO_BIN=/usr/src/fio-static/fio 00:14:15.841 + 
FIO_BIN=/usr/src/fio-static/fio 00:14:15.841 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:14:15.841 + [[ ! -v VFIO_QEMU_BIN ]] 00:14:15.841 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:14:15.841 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:15.841 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:14:15.841 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:14:15.841 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:15.841 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:14:15.841 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:14:15.841 Test configuration: 00:14:15.841 SPDK_RUN_FUNCTIONAL_TEST=1 00:14:15.841 SPDK_TEST_NVMF=1 00:14:15.841 SPDK_TEST_NVMF_TRANSPORT=tcp 00:14:15.841 SPDK_TEST_USDT=1 00:14:15.841 SPDK_TEST_NVMF_MDNS=1 00:14:15.841 SPDK_RUN_UBSAN=1 00:14:15.841 NET_TYPE=virt 00:14:15.841 SPDK_JSONRPC_GO_CLIENT=1 00:14:15.841 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:14:15.841 RUN_NIGHTLY=0 15:32:45 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:14:15.841 15:32:45 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:14:15.841 15:32:45 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:15.841 15:32:45 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:15.841 15:32:45 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.841 15:32:45 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.841 15:32:45 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.841 15:32:45 -- paths/export.sh@5 -- $ export PATH 00:14:15.841 15:32:45 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:15.841 15:32:45 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:14:15.841 15:32:45 -- common/autobuild_common.sh@435 -- $ date +%s 00:14:15.841 15:32:45 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714145565.XXXXXX 00:14:15.841 15:32:45 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714145565.Qsv7og 00:14:15.841 15:32:45 -- 
common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:14:15.841 15:32:45 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:14:15.841 15:32:45 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:14:15.841 15:32:45 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:14:15.841 15:32:45 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:14:15.841 15:32:45 -- common/autobuild_common.sh@451 -- $ get_config_params 00:14:15.841 15:32:45 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:14:15.841 15:32:45 -- common/autotest_common.sh@10 -- $ set +x 00:14:15.841 15:32:46 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:14:15.841 15:32:46 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:14:15.841 15:32:46 -- pm/common@17 -- $ local monitor 00:14:15.841 15:32:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:15.841 15:32:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5198 00:14:15.841 15:32:46 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:14:15.841 15:32:46 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=5200 00:14:15.841 15:32:46 -- pm/common@26 -- $ sleep 1 00:14:15.841 15:32:46 -- pm/common@21 -- $ date +%s 00:14:15.841 15:32:46 -- pm/common@21 -- $ date +%s 00:14:15.841 15:32:46 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714145566 00:14:15.841 15:32:46 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1714145566 00:14:15.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714145566_collect-vmstat.pm.log 00:14:15.841 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1714145566_collect-cpu-load.pm.log 00:14:16.777 15:32:47 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:14:16.777 15:32:47 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:14:16.777 15:32:47 -- spdk/autobuild.sh@12 -- $ umask 022 00:14:16.777 15:32:47 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:14:16.777 15:32:47 -- spdk/autobuild.sh@16 -- $ date -u 00:14:16.777 Fri Apr 26 03:32:47 PM UTC 2024 00:14:16.777 15:32:47 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:14:16.777 v24.05-pre-449-g2971e8ff3 00:14:16.777 15:32:47 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:14:16.777 15:32:47 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:14:16.777 15:32:47 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:14:16.777 15:32:47 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:14:16.777 15:32:47 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:14:16.777 15:32:47 -- common/autotest_common.sh@10 -- $ set +x 00:14:17.036 ************************************ 00:14:17.036 START TEST ubsan 00:14:17.036 ************************************ 00:14:17.036 using 
ubsan 00:14:17.036 15:32:47 -- common/autotest_common.sh@1111 -- $ echo 'using ubsan' 00:14:17.036 00:14:17.036 real 0m0.000s 00:14:17.036 user 0m0.000s 00:14:17.036 sys 0m0.000s 00:14:17.036 15:32:47 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:14:17.036 15:32:47 -- common/autotest_common.sh@10 -- $ set +x 00:14:17.036 ************************************ 00:14:17.036 END TEST ubsan 00:14:17.036 ************************************ 00:14:17.036 15:32:47 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:14:17.036 15:32:47 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:14:17.036 15:32:47 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:14:17.036 15:32:47 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:14:17.036 15:32:47 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:14:17.036 15:32:47 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:14:17.036 15:32:47 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:14:17.036 15:32:47 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:14:17.036 15:32:47 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang --with-shared 00:14:17.036 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:14:17.036 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:17.603 Using 'verbs' RDMA provider 00:14:33.051 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:14:45.357 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:14:45.357 go version go1.21.1 linux/amd64 00:14:45.357 Creating mk/config.mk...done. 00:14:45.357 Creating mk/cc.flags.mk...done. 00:14:45.357 Type 'make' to build. 00:14:45.357 15:33:14 -- spdk/autobuild.sh@69 -- $ run_test make make -j10 00:14:45.357 15:33:14 -- common/autotest_common.sh@1087 -- $ '[' 3 -le 1 ']' 00:14:45.357 15:33:14 -- common/autotest_common.sh@1093 -- $ xtrace_disable 00:14:45.357 15:33:14 -- common/autotest_common.sh@10 -- $ set +x 00:14:45.357 ************************************ 00:14:45.357 START TEST make 00:14:45.357 ************************************ 00:14:45.357 15:33:14 -- common/autotest_common.sh@1111 -- $ make -j10 00:14:45.357 make[1]: Nothing to be done for 'all'. 
00:14:55.381 The Meson build system 00:14:55.381 Version: 1.3.1 00:14:55.381 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:14:55.381 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:14:55.381 Build type: native build 00:14:55.381 Program cat found: YES (/usr/bin/cat) 00:14:55.381 Project name: DPDK 00:14:55.381 Project version: 23.11.0 00:14:55.381 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:14:55.381 C linker for the host machine: cc ld.bfd 2.39-16 00:14:55.381 Host machine cpu family: x86_64 00:14:55.381 Host machine cpu: x86_64 00:14:55.381 Message: ## Building in Developer Mode ## 00:14:55.381 Program pkg-config found: YES (/usr/bin/pkg-config) 00:14:55.381 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:14:55.381 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:14:55.381 Program python3 found: YES (/usr/bin/python3) 00:14:55.382 Program cat found: YES (/usr/bin/cat) 00:14:55.382 Compiler for C supports arguments -march=native: YES 00:14:55.382 Checking for size of "void *" : 8 00:14:55.382 Checking for size of "void *" : 8 (cached) 00:14:55.382 Library m found: YES 00:14:55.382 Library numa found: YES 00:14:55.382 Has header "numaif.h" : YES 00:14:55.382 Library fdt found: NO 00:14:55.382 Library execinfo found: NO 00:14:55.382 Has header "execinfo.h" : YES 00:14:55.382 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:14:55.382 Run-time dependency libarchive found: NO (tried pkgconfig) 00:14:55.382 Run-time dependency libbsd found: NO (tried pkgconfig) 00:14:55.382 Run-time dependency jansson found: NO (tried pkgconfig) 00:14:55.382 Run-time dependency openssl found: YES 3.0.9 00:14:55.382 Run-time dependency libpcap found: YES 1.10.4 00:14:55.382 Has header "pcap.h" with dependency libpcap: YES 00:14:55.382 Compiler for C supports arguments -Wcast-qual: YES 00:14:55.382 Compiler for C supports arguments -Wdeprecated: YES 00:14:55.382 Compiler for C supports arguments -Wformat: YES 00:14:55.382 Compiler for C supports arguments -Wformat-nonliteral: NO 00:14:55.382 Compiler for C supports arguments -Wformat-security: NO 00:14:55.382 Compiler for C supports arguments -Wmissing-declarations: YES 00:14:55.382 Compiler for C supports arguments -Wmissing-prototypes: YES 00:14:55.382 Compiler for C supports arguments -Wnested-externs: YES 00:14:55.382 Compiler for C supports arguments -Wold-style-definition: YES 00:14:55.382 Compiler for C supports arguments -Wpointer-arith: YES 00:14:55.382 Compiler for C supports arguments -Wsign-compare: YES 00:14:55.382 Compiler for C supports arguments -Wstrict-prototypes: YES 00:14:55.382 Compiler for C supports arguments -Wundef: YES 00:14:55.382 Compiler for C supports arguments -Wwrite-strings: YES 00:14:55.382 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:14:55.382 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:14:55.382 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:14:55.382 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:14:55.382 Program objdump found: YES (/usr/bin/objdump) 00:14:55.382 Compiler for C supports arguments -mavx512f: YES 00:14:55.382 Checking if "AVX512 checking" compiles: YES 00:14:55.382 Fetching value of define "__SSE4_2__" : 1 00:14:55.382 Fetching value of define "__AES__" : 1 00:14:55.382 Fetching value of define "__AVX__" : 1 00:14:55.382 
Fetching value of define "__AVX2__" : 1 00:14:55.382 Fetching value of define "__AVX512BW__" : (undefined) 00:14:55.382 Fetching value of define "__AVX512CD__" : (undefined) 00:14:55.382 Fetching value of define "__AVX512DQ__" : (undefined) 00:14:55.382 Fetching value of define "__AVX512F__" : (undefined) 00:14:55.382 Fetching value of define "__AVX512VL__" : (undefined) 00:14:55.382 Fetching value of define "__PCLMUL__" : 1 00:14:55.382 Fetching value of define "__RDRND__" : 1 00:14:55.382 Fetching value of define "__RDSEED__" : 1 00:14:55.382 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:14:55.382 Fetching value of define "__znver1__" : (undefined) 00:14:55.382 Fetching value of define "__znver2__" : (undefined) 00:14:55.382 Fetching value of define "__znver3__" : (undefined) 00:14:55.382 Fetching value of define "__znver4__" : (undefined) 00:14:55.382 Compiler for C supports arguments -Wno-format-truncation: YES 00:14:55.382 Message: lib/log: Defining dependency "log" 00:14:55.382 Message: lib/kvargs: Defining dependency "kvargs" 00:14:55.382 Message: lib/telemetry: Defining dependency "telemetry" 00:14:55.382 Checking for function "getentropy" : NO 00:14:55.382 Message: lib/eal: Defining dependency "eal" 00:14:55.382 Message: lib/ring: Defining dependency "ring" 00:14:55.382 Message: lib/rcu: Defining dependency "rcu" 00:14:55.382 Message: lib/mempool: Defining dependency "mempool" 00:14:55.382 Message: lib/mbuf: Defining dependency "mbuf" 00:14:55.382 Fetching value of define "__PCLMUL__" : 1 (cached) 00:14:55.382 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:14:55.382 Compiler for C supports arguments -mpclmul: YES 00:14:55.382 Compiler for C supports arguments -maes: YES 00:14:55.382 Compiler for C supports arguments -mavx512f: YES (cached) 00:14:55.382 Compiler for C supports arguments -mavx512bw: YES 00:14:55.382 Compiler for C supports arguments -mavx512dq: YES 00:14:55.382 Compiler for C supports arguments -mavx512vl: YES 00:14:55.382 Compiler for C supports arguments -mvpclmulqdq: YES 00:14:55.382 Compiler for C supports arguments -mavx2: YES 00:14:55.382 Compiler for C supports arguments -mavx: YES 00:14:55.382 Message: lib/net: Defining dependency "net" 00:14:55.382 Message: lib/meter: Defining dependency "meter" 00:14:55.382 Message: lib/ethdev: Defining dependency "ethdev" 00:14:55.382 Message: lib/pci: Defining dependency "pci" 00:14:55.382 Message: lib/cmdline: Defining dependency "cmdline" 00:14:55.382 Message: lib/hash: Defining dependency "hash" 00:14:55.382 Message: lib/timer: Defining dependency "timer" 00:14:55.382 Message: lib/compressdev: Defining dependency "compressdev" 00:14:55.382 Message: lib/cryptodev: Defining dependency "cryptodev" 00:14:55.382 Message: lib/dmadev: Defining dependency "dmadev" 00:14:55.382 Compiler for C supports arguments -Wno-cast-qual: YES 00:14:55.382 Message: lib/power: Defining dependency "power" 00:14:55.382 Message: lib/reorder: Defining dependency "reorder" 00:14:55.382 Message: lib/security: Defining dependency "security" 00:14:55.382 Has header "linux/userfaultfd.h" : YES 00:14:55.382 Has header "linux/vduse.h" : YES 00:14:55.382 Message: lib/vhost: Defining dependency "vhost" 00:14:55.382 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:14:55.382 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:14:55.382 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:14:55.382 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:14:55.382 Message: 
Disabling raw/* drivers: missing internal dependency "rawdev" 00:14:55.382 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:14:55.382 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:14:55.382 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:14:55.382 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:14:55.382 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:14:55.382 Program doxygen found: YES (/usr/bin/doxygen) 00:14:55.382 Configuring doxy-api-html.conf using configuration 00:14:55.382 Configuring doxy-api-man.conf using configuration 00:14:55.382 Program mandb found: YES (/usr/bin/mandb) 00:14:55.382 Program sphinx-build found: NO 00:14:55.382 Configuring rte_build_config.h using configuration 00:14:55.382 Message: 00:14:55.382 ================= 00:14:55.382 Applications Enabled 00:14:55.382 ================= 00:14:55.382 00:14:55.382 apps: 00:14:55.382 00:14:55.382 00:14:55.382 Message: 00:14:55.382 ================= 00:14:55.382 Libraries Enabled 00:14:55.382 ================= 00:14:55.382 00:14:55.382 libs: 00:14:55.382 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:14:55.382 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:14:55.382 cryptodev, dmadev, power, reorder, security, vhost, 00:14:55.382 00:14:55.382 Message: 00:14:55.382 =============== 00:14:55.382 Drivers Enabled 00:14:55.382 =============== 00:14:55.382 00:14:55.382 common: 00:14:55.382 00:14:55.382 bus: 00:14:55.382 pci, vdev, 00:14:55.382 mempool: 00:14:55.382 ring, 00:14:55.382 dma: 00:14:55.382 00:14:55.382 net: 00:14:55.382 00:14:55.382 crypto: 00:14:55.382 00:14:55.382 compress: 00:14:55.382 00:14:55.382 vdpa: 00:14:55.382 00:14:55.382 00:14:55.382 Message: 00:14:55.382 ================= 00:14:55.382 Content Skipped 00:14:55.382 ================= 00:14:55.382 00:14:55.382 apps: 00:14:55.382 dumpcap: explicitly disabled via build config 00:14:55.382 graph: explicitly disabled via build config 00:14:55.382 pdump: explicitly disabled via build config 00:14:55.382 proc-info: explicitly disabled via build config 00:14:55.382 test-acl: explicitly disabled via build config 00:14:55.382 test-bbdev: explicitly disabled via build config 00:14:55.382 test-cmdline: explicitly disabled via build config 00:14:55.382 test-compress-perf: explicitly disabled via build config 00:14:55.382 test-crypto-perf: explicitly disabled via build config 00:14:55.382 test-dma-perf: explicitly disabled via build config 00:14:55.382 test-eventdev: explicitly disabled via build config 00:14:55.382 test-fib: explicitly disabled via build config 00:14:55.382 test-flow-perf: explicitly disabled via build config 00:14:55.382 test-gpudev: explicitly disabled via build config 00:14:55.382 test-mldev: explicitly disabled via build config 00:14:55.382 test-pipeline: explicitly disabled via build config 00:14:55.382 test-pmd: explicitly disabled via build config 00:14:55.382 test-regex: explicitly disabled via build config 00:14:55.382 test-sad: explicitly disabled via build config 00:14:55.382 test-security-perf: explicitly disabled via build config 00:14:55.382 00:14:55.382 libs: 00:14:55.382 metrics: explicitly disabled via build config 00:14:55.382 acl: explicitly disabled via build config 00:14:55.382 bbdev: explicitly disabled via build config 00:14:55.382 bitratestats: explicitly disabled via build config 00:14:55.383 bpf: explicitly disabled via build config 00:14:55.383 cfgfile: explicitly 
disabled via build config 00:14:55.383 distributor: explicitly disabled via build config 00:14:55.383 efd: explicitly disabled via build config 00:14:55.383 eventdev: explicitly disabled via build config 00:14:55.383 dispatcher: explicitly disabled via build config 00:14:55.383 gpudev: explicitly disabled via build config 00:14:55.383 gro: explicitly disabled via build config 00:14:55.383 gso: explicitly disabled via build config 00:14:55.383 ip_frag: explicitly disabled via build config 00:14:55.383 jobstats: explicitly disabled via build config 00:14:55.383 latencystats: explicitly disabled via build config 00:14:55.383 lpm: explicitly disabled via build config 00:14:55.383 member: explicitly disabled via build config 00:14:55.383 pcapng: explicitly disabled via build config 00:14:55.383 rawdev: explicitly disabled via build config 00:14:55.383 regexdev: explicitly disabled via build config 00:14:55.383 mldev: explicitly disabled via build config 00:14:55.383 rib: explicitly disabled via build config 00:14:55.383 sched: explicitly disabled via build config 00:14:55.383 stack: explicitly disabled via build config 00:14:55.383 ipsec: explicitly disabled via build config 00:14:55.383 pdcp: explicitly disabled via build config 00:14:55.383 fib: explicitly disabled via build config 00:14:55.383 port: explicitly disabled via build config 00:14:55.383 pdump: explicitly disabled via build config 00:14:55.383 table: explicitly disabled via build config 00:14:55.383 pipeline: explicitly disabled via build config 00:14:55.383 graph: explicitly disabled via build config 00:14:55.383 node: explicitly disabled via build config 00:14:55.383 00:14:55.383 drivers: 00:14:55.383 common/cpt: not in enabled drivers build config 00:14:55.383 common/dpaax: not in enabled drivers build config 00:14:55.383 common/iavf: not in enabled drivers build config 00:14:55.383 common/idpf: not in enabled drivers build config 00:14:55.383 common/mvep: not in enabled drivers build config 00:14:55.383 common/octeontx: not in enabled drivers build config 00:14:55.383 bus/auxiliary: not in enabled drivers build config 00:14:55.383 bus/cdx: not in enabled drivers build config 00:14:55.383 bus/dpaa: not in enabled drivers build config 00:14:55.383 bus/fslmc: not in enabled drivers build config 00:14:55.383 bus/ifpga: not in enabled drivers build config 00:14:55.383 bus/platform: not in enabled drivers build config 00:14:55.383 bus/vmbus: not in enabled drivers build config 00:14:55.383 common/cnxk: not in enabled drivers build config 00:14:55.383 common/mlx5: not in enabled drivers build config 00:14:55.383 common/nfp: not in enabled drivers build config 00:14:55.383 common/qat: not in enabled drivers build config 00:14:55.383 common/sfc_efx: not in enabled drivers build config 00:14:55.383 mempool/bucket: not in enabled drivers build config 00:14:55.383 mempool/cnxk: not in enabled drivers build config 00:14:55.383 mempool/dpaa: not in enabled drivers build config 00:14:55.383 mempool/dpaa2: not in enabled drivers build config 00:14:55.383 mempool/octeontx: not in enabled drivers build config 00:14:55.383 mempool/stack: not in enabled drivers build config 00:14:55.383 dma/cnxk: not in enabled drivers build config 00:14:55.383 dma/dpaa: not in enabled drivers build config 00:14:55.383 dma/dpaa2: not in enabled drivers build config 00:14:55.383 dma/hisilicon: not in enabled drivers build config 00:14:55.383 dma/idxd: not in enabled drivers build config 00:14:55.383 dma/ioat: not in enabled drivers build config 00:14:55.383 
dma/skeleton: not in enabled drivers build config 00:14:55.383 net/af_packet: not in enabled drivers build config 00:14:55.383 net/af_xdp: not in enabled drivers build config 00:14:55.383 net/ark: not in enabled drivers build config 00:14:55.383 net/atlantic: not in enabled drivers build config 00:14:55.383 net/avp: not in enabled drivers build config 00:14:55.383 net/axgbe: not in enabled drivers build config 00:14:55.383 net/bnx2x: not in enabled drivers build config 00:14:55.383 net/bnxt: not in enabled drivers build config 00:14:55.383 net/bonding: not in enabled drivers build config 00:14:55.383 net/cnxk: not in enabled drivers build config 00:14:55.383 net/cpfl: not in enabled drivers build config 00:14:55.383 net/cxgbe: not in enabled drivers build config 00:14:55.383 net/dpaa: not in enabled drivers build config 00:14:55.383 net/dpaa2: not in enabled drivers build config 00:14:55.383 net/e1000: not in enabled drivers build config 00:14:55.383 net/ena: not in enabled drivers build config 00:14:55.383 net/enetc: not in enabled drivers build config 00:14:55.383 net/enetfec: not in enabled drivers build config 00:14:55.383 net/enic: not in enabled drivers build config 00:14:55.383 net/failsafe: not in enabled drivers build config 00:14:55.383 net/fm10k: not in enabled drivers build config 00:14:55.383 net/gve: not in enabled drivers build config 00:14:55.383 net/hinic: not in enabled drivers build config 00:14:55.383 net/hns3: not in enabled drivers build config 00:14:55.383 net/i40e: not in enabled drivers build config 00:14:55.383 net/iavf: not in enabled drivers build config 00:14:55.383 net/ice: not in enabled drivers build config 00:14:55.383 net/idpf: not in enabled drivers build config 00:14:55.383 net/igc: not in enabled drivers build config 00:14:55.383 net/ionic: not in enabled drivers build config 00:14:55.383 net/ipn3ke: not in enabled drivers build config 00:14:55.383 net/ixgbe: not in enabled drivers build config 00:14:55.383 net/mana: not in enabled drivers build config 00:14:55.383 net/memif: not in enabled drivers build config 00:14:55.383 net/mlx4: not in enabled drivers build config 00:14:55.383 net/mlx5: not in enabled drivers build config 00:14:55.383 net/mvneta: not in enabled drivers build config 00:14:55.383 net/mvpp2: not in enabled drivers build config 00:14:55.383 net/netvsc: not in enabled drivers build config 00:14:55.383 net/nfb: not in enabled drivers build config 00:14:55.383 net/nfp: not in enabled drivers build config 00:14:55.383 net/ngbe: not in enabled drivers build config 00:14:55.383 net/null: not in enabled drivers build config 00:14:55.383 net/octeontx: not in enabled drivers build config 00:14:55.383 net/octeon_ep: not in enabled drivers build config 00:14:55.383 net/pcap: not in enabled drivers build config 00:14:55.383 net/pfe: not in enabled drivers build config 00:14:55.383 net/qede: not in enabled drivers build config 00:14:55.383 net/ring: not in enabled drivers build config 00:14:55.383 net/sfc: not in enabled drivers build config 00:14:55.383 net/softnic: not in enabled drivers build config 00:14:55.383 net/tap: not in enabled drivers build config 00:14:55.383 net/thunderx: not in enabled drivers build config 00:14:55.383 net/txgbe: not in enabled drivers build config 00:14:55.383 net/vdev_netvsc: not in enabled drivers build config 00:14:55.383 net/vhost: not in enabled drivers build config 00:14:55.383 net/virtio: not in enabled drivers build config 00:14:55.383 net/vmxnet3: not in enabled drivers build config 00:14:55.383 raw/*: 
missing internal dependency, "rawdev" 00:14:55.383 crypto/armv8: not in enabled drivers build config 00:14:55.383 crypto/bcmfs: not in enabled drivers build config 00:14:55.383 crypto/caam_jr: not in enabled drivers build config 00:14:55.383 crypto/ccp: not in enabled drivers build config 00:14:55.383 crypto/cnxk: not in enabled drivers build config 00:14:55.383 crypto/dpaa_sec: not in enabled drivers build config 00:14:55.383 crypto/dpaa2_sec: not in enabled drivers build config 00:14:55.383 crypto/ipsec_mb: not in enabled drivers build config 00:14:55.383 crypto/mlx5: not in enabled drivers build config 00:14:55.383 crypto/mvsam: not in enabled drivers build config 00:14:55.383 crypto/nitrox: not in enabled drivers build config 00:14:55.383 crypto/null: not in enabled drivers build config 00:14:55.383 crypto/octeontx: not in enabled drivers build config 00:14:55.383 crypto/openssl: not in enabled drivers build config 00:14:55.383 crypto/scheduler: not in enabled drivers build config 00:14:55.383 crypto/uadk: not in enabled drivers build config 00:14:55.383 crypto/virtio: not in enabled drivers build config 00:14:55.383 compress/isal: not in enabled drivers build config 00:14:55.383 compress/mlx5: not in enabled drivers build config 00:14:55.383 compress/octeontx: not in enabled drivers build config 00:14:55.383 compress/zlib: not in enabled drivers build config 00:14:55.383 regex/*: missing internal dependency, "regexdev" 00:14:55.383 ml/*: missing internal dependency, "mldev" 00:14:55.383 vdpa/ifc: not in enabled drivers build config 00:14:55.383 vdpa/mlx5: not in enabled drivers build config 00:14:55.383 vdpa/nfp: not in enabled drivers build config 00:14:55.383 vdpa/sfc: not in enabled drivers build config 00:14:55.383 event/*: missing internal dependency, "eventdev" 00:14:55.383 baseband/*: missing internal dependency, "bbdev" 00:14:55.383 gpu/*: missing internal dependency, "gpudev" 00:14:55.383 00:14:55.383 00:14:55.383 Build targets in project: 85 00:14:55.383 00:14:55.383 DPDK 23.11.0 00:14:55.383 00:14:55.383 User defined options 00:14:55.383 buildtype : debug 00:14:55.383 default_library : shared 00:14:55.383 libdir : lib 00:14:55.383 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:14:55.383 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:14:55.383 c_link_args : 00:14:55.383 cpu_instruction_set: native 00:14:55.383 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:14:55.384 disable_libs : acl,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:14:55.384 enable_docs : false 00:14:55.384 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:14:55.384 enable_kmods : false 00:14:55.384 tests : false 00:14:55.384 00:14:55.384 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:14:55.661 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:14:55.919 [1/265] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:14:55.919 [2/265] Linking static target lib/librte_kvargs.a 00:14:55.919 [3/265] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:14:55.919 [4/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:14:55.919 [5/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:14:55.919 [6/265] Compiling C object lib/librte_log.a.p/log_log.c.o 00:14:55.919 [7/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:14:55.919 [8/265] Linking static target lib/librte_log.a 00:14:55.919 [9/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:14:55.919 [10/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:14:56.486 [11/265] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:14:56.486 [12/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:14:56.486 [13/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:14:56.744 [14/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:14:56.744 [15/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:14:56.744 [16/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:14:56.744 [17/265] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:14:56.744 [18/265] Linking static target lib/librte_telemetry.a 00:14:56.744 [19/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:14:57.002 [20/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:14:57.003 [21/265] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:14:57.003 [22/265] Linking target lib/librte_log.so.24.0 00:14:57.003 [23/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:14:57.260 [24/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:14:57.260 [25/265] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:14:57.260 [26/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:14:57.260 [27/265] Linking target lib/librte_kvargs.so.24.0 00:14:57.517 [28/265] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:14:57.517 [29/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:14:57.517 [30/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:14:57.775 [31/265] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:14:57.775 [32/265] Linking target lib/librte_telemetry.so.24.0 00:14:57.775 [33/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:14:57.775 [34/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:14:57.775 [35/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:14:57.775 [36/265] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:14:57.775 [37/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:14:58.033 [38/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:14:58.033 [39/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:14:58.292 [40/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:14:58.292 [41/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:14:58.292 [42/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:14:58.292 [43/265] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:14:58.292 [44/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:14:58.550 [45/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:14:58.550 [46/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:14:58.810 [47/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:14:58.810 [48/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:14:58.810 [49/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:14:59.069 [50/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:14:59.069 [51/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:14:59.069 [52/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:14:59.069 [53/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:14:59.069 [54/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:14:59.069 [55/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:14:59.328 [56/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:14:59.328 [57/265] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:14:59.328 [58/265] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:14:59.587 [59/265] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:14:59.587 [60/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:14:59.587 [61/265] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:14:59.845 [62/265] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:14:59.845 [63/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:15:00.103 [64/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:15:00.103 [65/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:15:00.103 [66/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:15:00.103 [67/265] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:15:00.103 [68/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:15:00.362 [69/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:15:00.362 [70/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:15:00.362 [71/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:15:00.621 [72/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:15:00.621 [73/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:15:00.621 [74/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:15:00.621 [75/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:15:00.621 [76/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:15:00.880 [77/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:15:00.880 [78/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:15:00.880 [79/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:15:01.139 [80/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:15:01.139 [81/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:15:01.398 [82/265] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:15:01.398 [83/265] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 
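(For reference, the "User defined options" summary printed above corresponds roughly to a manual Meson invocation along the lines of the sketch below. This is a reconstruction from the values shown in the log only: the actual command is issued by SPDK's configure/dpdkbuild wrapper and does not appear in this excerpt, and the long disable_apps/disable_libs lists are omitted for brevity.)

    # Sketch only -- option values copied from the "User defined options" block above;
    # disable_apps/disable_libs lists elided, exact wrapper invocation not shown in this log.
    meson setup /home/vagrant/spdk_repo/spdk/dpdk/build-tmp \
        --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
        --buildtype=debug \
        --libdir=lib \
        -Ddefault_library=shared \
        -Dcpu_instruction_set=native \
        -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
        -Denable_docs=false \
        -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=false \
        -Dtests=false
    # Compile with the backend command the log reports later:
    /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10
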
00:15:01.398 [84/265] Linking static target lib/librte_ring.a 00:15:01.398 [85/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:15:01.398 [86/265] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:15:01.661 [87/265] Linking static target lib/librte_eal.a 00:15:01.661 [88/265] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:15:01.661 [89/265] Linking static target lib/librte_rcu.a 00:15:01.919 [90/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:15:01.919 [91/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:15:01.919 [92/265] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:15:01.919 [93/265] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:15:01.919 [94/265] Linking static target lib/librte_mempool.a 00:15:01.919 [95/265] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:15:02.177 [96/265] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:15:02.435 [97/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:15:02.435 [98/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:15:02.694 [99/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:15:02.694 [100/265] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:15:02.694 [101/265] Linking static target lib/net/libnet_crc_avx512_lib.a 00:15:02.694 [102/265] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:15:02.694 [103/265] Linking static target lib/librte_mbuf.a 00:15:02.951 [104/265] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:15:03.210 [105/265] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:15:03.210 [106/265] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:15:03.210 [107/265] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:15:03.210 [108/265] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:15:03.210 [109/265] Linking static target lib/librte_net.a 00:15:03.469 [110/265] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:15:03.469 [111/265] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:15:03.469 [112/265] Linking static target lib/librte_meter.a 00:15:03.727 [113/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:15:03.727 [114/265] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:15:03.985 [115/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:15:03.985 [116/265] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:15:03.985 [117/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:15:03.985 [118/265] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:15:03.985 [119/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:15:04.919 [120/265] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:15:04.919 [121/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:15:04.919 [122/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:15:05.177 [123/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:15:05.177 [124/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:15:05.177 
[125/265] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:15:05.177 [126/265] Linking static target lib/librte_pci.a 00:15:05.177 [127/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:15:05.435 [128/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:15:05.435 [129/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:15:05.435 [130/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:15:05.435 [131/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:15:05.693 [132/265] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:05.693 [133/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:15:05.693 [134/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:15:05.693 [135/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:15:05.693 [136/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:15:05.951 [137/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:15:05.951 [138/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:15:05.951 [139/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:15:05.951 [140/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:15:05.951 [141/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:15:05.951 [142/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:15:05.951 [143/265] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:15:05.951 [144/265] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:15:05.951 [145/265] Linking static target lib/librte_ethdev.a 00:15:06.517 [146/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:15:06.517 [147/265] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:15:06.517 [148/265] Linking static target lib/librte_cmdline.a 00:15:06.775 [149/265] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:15:06.775 [150/265] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:15:06.775 [151/265] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:15:07.040 [152/265] Linking static target lib/librte_timer.a 00:15:07.040 [153/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:15:07.040 [154/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:15:07.314 [155/265] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:15:07.314 [156/265] Linking static target lib/librte_hash.a 00:15:07.581 [157/265] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:15:07.581 [158/265] Linking static target lib/librte_compressdev.a 00:15:07.581 [159/265] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:15:07.581 [160/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:15:07.581 [161/265] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:15:07.838 [162/265] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:15:07.838 [163/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:15:08.095 [164/265] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:15:08.095 [165/265] Compiling C object 
lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:15:08.095 [166/265] Linking static target lib/librte_dmadev.a 00:15:08.353 [167/265] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:15:08.353 [168/265] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:15:08.353 [169/265] Linking static target lib/librte_cryptodev.a 00:15:08.353 [170/265] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:15:08.353 [171/265] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:15:08.611 [172/265] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:15:08.611 [173/265] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:15:08.611 [174/265] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:08.611 [175/265] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:15:08.869 [176/265] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:09.127 [177/265] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:15:09.127 [178/265] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:15:09.127 [179/265] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:15:09.127 [180/265] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:15:09.127 [181/265] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:15:09.385 [182/265] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:15:09.385 [183/265] Linking static target lib/librte_power.a 00:15:09.385 [184/265] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:15:09.385 [185/265] Linking static target lib/librte_reorder.a 00:15:09.643 [186/265] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:15:09.901 [187/265] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:15:09.901 [188/265] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:15:09.901 [189/265] Linking static target lib/librte_security.a 00:15:09.901 [190/265] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:15:09.901 [191/265] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.159 [192/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:15:10.418 [193/265] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.418 [194/265] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:15:10.675 [195/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:15:10.676 [196/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:15:10.676 [197/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:15:10.933 [198/265] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:11.191 [199/265] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:15:11.191 [200/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:15:11.191 [201/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:15:11.191 [202/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:15:11.509 [203/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:15:11.509 [204/265] Compiling 
C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:15:11.784 [205/265] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:15:11.784 [206/265] Linking static target drivers/libtmp_rte_bus_vdev.a 00:15:11.784 [207/265] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:15:11.784 [208/265] Linking static target drivers/libtmp_rte_bus_pci.a 00:15:11.784 [209/265] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:15:12.042 [210/265] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:15:12.042 [211/265] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:15:12.042 [212/265] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:12.042 [213/265] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:15:12.042 [214/265] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:12.042 [215/265] Linking static target drivers/librte_bus_vdev.a 00:15:12.042 [216/265] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:15:12.043 [217/265] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:15:12.043 [218/265] Linking static target drivers/libtmp_rte_mempool_ring.a 00:15:12.043 [219/265] Linking static target drivers/librte_bus_pci.a 00:15:12.301 [220/265] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:15:12.301 [221/265] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:12.301 [222/265] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:15:12.301 [223/265] Linking static target drivers/librte_mempool_ring.a 00:15:12.301 [224/265] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:12.560 [225/265] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:15:13.501 [226/265] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:15:13.501 [227/265] Linking static target lib/librte_vhost.a 00:15:14.065 [228/265] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:15:14.065 [229/265] Linking target lib/librte_eal.so.24.0 00:15:14.065 [230/265] Generating symbol file lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:15:14.322 [231/265] Linking target drivers/librte_bus_vdev.so.24.0 00:15:14.322 [232/265] Linking target lib/librte_meter.so.24.0 00:15:14.322 [233/265] Linking target lib/librte_pci.so.24.0 00:15:14.322 [234/265] Linking target lib/librte_ring.so.24.0 00:15:14.322 [235/265] Linking target lib/librte_timer.so.24.0 00:15:14.322 [236/265] Linking target lib/librte_dmadev.so.24.0 00:15:14.322 [237/265] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:15:14.322 [238/265] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:15:14.322 [239/265] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:15:14.322 [240/265] Linking target drivers/librte_bus_pci.so.24.0 00:15:14.322 [241/265] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:15:14.322 [242/265] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:15:14.579 [243/265] Linking target lib/librte_mempool.so.24.0 00:15:14.579 [244/265] 
Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:15:14.579 [245/265] Linking target lib/librte_rcu.so.24.0 00:15:14.579 [246/265] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:15:14.579 [247/265] Linking target drivers/librte_mempool_ring.so.24.0 00:15:14.579 [248/265] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:15:14.579 [249/265] Linking target lib/librte_mbuf.so.24.0 00:15:14.836 [250/265] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:15:14.836 [251/265] Linking target lib/librte_net.so.24.0 00:15:14.836 [252/265] Linking target lib/librte_compressdev.so.24.0 00:15:14.836 [253/265] Linking target lib/librte_cryptodev.so.24.0 00:15:14.836 [254/265] Linking target lib/librte_reorder.so.24.0 00:15:14.836 [255/265] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:15:15.094 [256/265] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:15:15.094 [257/265] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:15:15.094 [258/265] Linking target lib/librte_cmdline.so.24.0 00:15:15.094 [259/265] Linking target lib/librte_security.so.24.0 00:15:15.094 [260/265] Linking target lib/librte_hash.so.24.0 00:15:15.094 [261/265] Linking target lib/librte_ethdev.so.24.0 00:15:15.094 [262/265] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:15:15.094 [263/265] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:15:15.351 [264/265] Linking target lib/librte_power.so.24.0 00:15:15.351 [265/265] Linking target lib/librte_vhost.so.24.0 00:15:15.351 INFO: autodetecting backend as ninja 00:15:15.351 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:15:17.916 CC lib/ut/ut.o 00:15:17.916 CC lib/ut_mock/mock.o 00:15:17.916 CC lib/log/log.o 00:15:17.916 CC lib/log/log_flags.o 00:15:17.916 CC lib/log/log_deprecated.o 00:15:17.916 LIB libspdk_ut_mock.a 00:15:17.916 SO libspdk_ut_mock.so.6.0 00:15:17.916 LIB libspdk_ut.a 00:15:17.916 LIB libspdk_log.a 00:15:17.916 SO libspdk_ut.so.2.0 00:15:17.916 SYMLINK libspdk_ut_mock.so 00:15:17.916 SO libspdk_log.so.7.0 00:15:17.916 SYMLINK libspdk_ut.so 00:15:17.916 SYMLINK libspdk_log.so 00:15:18.174 CC lib/ioat/ioat.o 00:15:18.174 CC lib/dma/dma.o 00:15:18.174 CC lib/util/base64.o 00:15:18.174 CC lib/util/bit_array.o 00:15:18.174 CC lib/util/cpuset.o 00:15:18.174 CC lib/util/crc16.o 00:15:18.174 CC lib/util/crc32.o 00:15:18.174 CC lib/util/crc32c.o 00:15:18.174 CXX lib/trace_parser/trace.o 00:15:18.433 CC lib/vfio_user/host/vfio_user_pci.o 00:15:18.433 CC lib/util/crc32_ieee.o 00:15:18.433 CC lib/vfio_user/host/vfio_user.o 00:15:18.433 CC lib/util/crc64.o 00:15:18.433 CC lib/util/dif.o 00:15:18.433 LIB libspdk_dma.a 00:15:18.433 CC lib/util/fd.o 00:15:18.433 CC lib/util/file.o 00:15:18.433 SO libspdk_dma.so.4.0 00:15:18.692 SYMLINK libspdk_dma.so 00:15:18.692 CC lib/util/hexlify.o 00:15:18.692 CC lib/util/iov.o 00:15:18.692 LIB libspdk_ioat.a 00:15:18.692 CC lib/util/math.o 00:15:18.692 SO libspdk_ioat.so.7.0 00:15:18.692 CC lib/util/pipe.o 00:15:18.692 CC lib/util/strerror_tls.o 00:15:18.692 CC lib/util/string.o 00:15:18.692 LIB libspdk_vfio_user.a 00:15:18.692 SYMLINK libspdk_ioat.so 00:15:18.692 CC lib/util/uuid.o 00:15:18.692 SO libspdk_vfio_user.so.5.0 00:15:18.692 CC lib/util/fd_group.o 
00:15:18.692 CC lib/util/xor.o 00:15:18.692 CC lib/util/zipf.o 00:15:18.692 SYMLINK libspdk_vfio_user.so 00:15:18.950 LIB libspdk_util.a 00:15:19.210 SO libspdk_util.so.9.0 00:15:19.210 LIB libspdk_trace_parser.a 00:15:19.210 SO libspdk_trace_parser.so.5.0 00:15:19.467 SYMLINK libspdk_util.so 00:15:19.468 SYMLINK libspdk_trace_parser.so 00:15:19.468 CC lib/conf/conf.o 00:15:19.468 CC lib/idxd/idxd.o 00:15:19.468 CC lib/idxd/idxd_user.o 00:15:19.468 CC lib/json/json_parse.o 00:15:19.468 CC lib/json/json_util.o 00:15:19.468 CC lib/rdma/common.o 00:15:19.468 CC lib/json/json_write.o 00:15:19.468 CC lib/rdma/rdma_verbs.o 00:15:19.468 CC lib/vmd/vmd.o 00:15:19.468 CC lib/env_dpdk/env.o 00:15:19.726 LIB libspdk_conf.a 00:15:19.726 CC lib/vmd/led.o 00:15:19.726 CC lib/env_dpdk/memory.o 00:15:19.726 CC lib/env_dpdk/pci.o 00:15:19.726 SO libspdk_conf.so.6.0 00:15:19.726 CC lib/env_dpdk/init.o 00:15:19.726 LIB libspdk_rdma.a 00:15:19.726 LIB libspdk_json.a 00:15:19.984 SO libspdk_rdma.so.6.0 00:15:19.984 SO libspdk_json.so.6.0 00:15:19.984 SYMLINK libspdk_conf.so 00:15:19.984 CC lib/env_dpdk/threads.o 00:15:19.984 CC lib/env_dpdk/pci_ioat.o 00:15:19.984 SYMLINK libspdk_json.so 00:15:19.984 SYMLINK libspdk_rdma.so 00:15:19.984 CC lib/env_dpdk/pci_virtio.o 00:15:19.984 CC lib/env_dpdk/pci_vmd.o 00:15:19.984 LIB libspdk_idxd.a 00:15:19.984 CC lib/env_dpdk/pci_idxd.o 00:15:19.984 CC lib/env_dpdk/pci_event.o 00:15:19.984 SO libspdk_idxd.so.12.0 00:15:19.984 CC lib/env_dpdk/sigbus_handler.o 00:15:20.242 CC lib/env_dpdk/pci_dpdk.o 00:15:20.242 SYMLINK libspdk_idxd.so 00:15:20.242 CC lib/env_dpdk/pci_dpdk_2207.o 00:15:20.242 LIB libspdk_vmd.a 00:15:20.242 CC lib/env_dpdk/pci_dpdk_2211.o 00:15:20.242 SO libspdk_vmd.so.6.0 00:15:20.242 CC lib/jsonrpc/jsonrpc_server.o 00:15:20.242 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:15:20.242 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:15:20.242 CC lib/jsonrpc/jsonrpc_client.o 00:15:20.242 SYMLINK libspdk_vmd.so 00:15:20.530 LIB libspdk_jsonrpc.a 00:15:20.530 SO libspdk_jsonrpc.so.6.0 00:15:20.829 SYMLINK libspdk_jsonrpc.so 00:15:21.088 CC lib/rpc/rpc.o 00:15:21.088 LIB libspdk_env_dpdk.a 00:15:21.088 SO libspdk_env_dpdk.so.14.0 00:15:21.088 LIB libspdk_rpc.a 00:15:21.347 SO libspdk_rpc.so.6.0 00:15:21.347 SYMLINK libspdk_env_dpdk.so 00:15:21.347 SYMLINK libspdk_rpc.so 00:15:21.606 CC lib/trace/trace.o 00:15:21.606 CC lib/trace/trace_rpc.o 00:15:21.606 CC lib/trace/trace_flags.o 00:15:21.606 CC lib/keyring/keyring.o 00:15:21.606 CC lib/notify/notify.o 00:15:21.606 CC lib/notify/notify_rpc.o 00:15:21.606 CC lib/keyring/keyring_rpc.o 00:15:21.865 LIB libspdk_notify.a 00:15:21.865 SO libspdk_notify.so.6.0 00:15:21.865 LIB libspdk_keyring.a 00:15:21.865 LIB libspdk_trace.a 00:15:21.865 SYMLINK libspdk_notify.so 00:15:21.865 SO libspdk_keyring.so.1.0 00:15:21.865 SO libspdk_trace.so.10.0 00:15:21.865 SYMLINK libspdk_keyring.so 00:15:21.865 SYMLINK libspdk_trace.so 00:15:22.124 CC lib/sock/sock_rpc.o 00:15:22.124 CC lib/sock/sock.o 00:15:22.124 CC lib/thread/thread.o 00:15:22.124 CC lib/thread/iobuf.o 00:15:22.690 LIB libspdk_sock.a 00:15:22.690 SO libspdk_sock.so.9.0 00:15:22.690 SYMLINK libspdk_sock.so 00:15:22.956 CC lib/nvme/nvme_ctrlr_cmd.o 00:15:22.956 CC lib/nvme/nvme_ctrlr.o 00:15:22.956 CC lib/nvme/nvme_fabric.o 00:15:22.956 CC lib/nvme/nvme_ns_cmd.o 00:15:22.956 CC lib/nvme/nvme_pcie_common.o 00:15:22.956 CC lib/nvme/nvme_ns.o 00:15:22.956 CC lib/nvme/nvme_pcie.o 00:15:22.956 CC lib/nvme/nvme_qpair.o 00:15:22.956 CC lib/nvme/nvme.o 00:15:23.894 LIB libspdk_thread.a 
00:15:23.894 SO libspdk_thread.so.10.0 00:15:23.894 CC lib/nvme/nvme_quirks.o 00:15:23.894 CC lib/nvme/nvme_transport.o 00:15:23.894 SYMLINK libspdk_thread.so 00:15:23.894 CC lib/nvme/nvme_discovery.o 00:15:23.894 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:15:23.894 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:15:23.894 CC lib/nvme/nvme_tcp.o 00:15:24.153 CC lib/nvme/nvme_opal.o 00:15:24.153 CC lib/nvme/nvme_io_msg.o 00:15:24.153 CC lib/nvme/nvme_poll_group.o 00:15:24.412 CC lib/nvme/nvme_zns.o 00:15:24.710 CC lib/nvme/nvme_stubs.o 00:15:24.710 CC lib/nvme/nvme_auth.o 00:15:24.710 CC lib/accel/accel.o 00:15:24.710 CC lib/blob/blobstore.o 00:15:24.710 CC lib/blob/request.o 00:15:24.969 CC lib/init/json_config.o 00:15:24.969 CC lib/blob/zeroes.o 00:15:24.969 CC lib/blob/blob_bs_dev.o 00:15:24.969 CC lib/init/subsystem.o 00:15:25.228 CC lib/nvme/nvme_cuse.o 00:15:25.228 CC lib/nvme/nvme_rdma.o 00:15:25.228 CC lib/virtio/virtio.o 00:15:25.228 CC lib/virtio/virtio_vhost_user.o 00:15:25.228 CC lib/init/subsystem_rpc.o 00:15:25.228 CC lib/accel/accel_rpc.o 00:15:25.486 CC lib/init/rpc.o 00:15:25.486 CC lib/virtio/virtio_vfio_user.o 00:15:25.486 CC lib/accel/accel_sw.o 00:15:25.486 CC lib/virtio/virtio_pci.o 00:15:25.745 LIB libspdk_init.a 00:15:25.745 SO libspdk_init.so.5.0 00:15:25.745 SYMLINK libspdk_init.so 00:15:25.745 LIB libspdk_virtio.a 00:15:26.004 SO libspdk_virtio.so.7.0 00:15:26.004 LIB libspdk_accel.a 00:15:26.004 SYMLINK libspdk_virtio.so 00:15:26.004 SO libspdk_accel.so.15.0 00:15:26.004 CC lib/event/app.o 00:15:26.004 CC lib/event/reactor.o 00:15:26.004 CC lib/event/log_rpc.o 00:15:26.004 CC lib/event/app_rpc.o 00:15:26.004 CC lib/event/scheduler_static.o 00:15:26.004 SYMLINK libspdk_accel.so 00:15:26.263 CC lib/bdev/bdev.o 00:15:26.263 CC lib/bdev/scsi_nvme.o 00:15:26.263 CC lib/bdev/bdev_rpc.o 00:15:26.263 CC lib/bdev/bdev_zone.o 00:15:26.263 CC lib/bdev/part.o 00:15:26.522 LIB libspdk_event.a 00:15:26.522 SO libspdk_event.so.13.0 00:15:26.522 SYMLINK libspdk_event.so 00:15:26.522 LIB libspdk_nvme.a 00:15:26.781 SO libspdk_nvme.so.13.0 00:15:27.348 SYMLINK libspdk_nvme.so 00:15:27.609 LIB libspdk_blob.a 00:15:27.609 SO libspdk_blob.so.11.0 00:15:27.870 SYMLINK libspdk_blob.so 00:15:28.128 CC lib/blobfs/blobfs.o 00:15:28.128 CC lib/blobfs/tree.o 00:15:28.128 CC lib/lvol/lvol.o 00:15:28.699 LIB libspdk_blobfs.a 00:15:28.957 SO libspdk_blobfs.so.10.0 00:15:28.957 LIB libspdk_lvol.a 00:15:28.957 SYMLINK libspdk_blobfs.so 00:15:28.957 SO libspdk_lvol.so.10.0 00:15:28.957 SYMLINK libspdk_lvol.so 00:15:29.215 LIB libspdk_bdev.a 00:15:29.215 SO libspdk_bdev.so.15.0 00:15:29.215 SYMLINK libspdk_bdev.so 00:15:29.472 CC lib/scsi/dev.o 00:15:29.472 CC lib/scsi/lun.o 00:15:29.472 CC lib/scsi/port.o 00:15:29.472 CC lib/nbd/nbd.o 00:15:29.472 CC lib/nvmf/ctrlr.o 00:15:29.472 CC lib/scsi/scsi.o 00:15:29.472 CC lib/scsi/scsi_bdev.o 00:15:29.472 CC lib/nvmf/ctrlr_discovery.o 00:15:29.472 CC lib/ublk/ublk.o 00:15:29.472 CC lib/ftl/ftl_core.o 00:15:29.730 CC lib/nvmf/ctrlr_bdev.o 00:15:29.730 CC lib/nvmf/subsystem.o 00:15:29.730 CC lib/nvmf/nvmf.o 00:15:29.988 CC lib/nvmf/nvmf_rpc.o 00:15:29.988 CC lib/ftl/ftl_init.o 00:15:29.988 CC lib/scsi/scsi_pr.o 00:15:29.988 CC lib/nbd/nbd_rpc.o 00:15:29.988 CC lib/ftl/ftl_layout.o 00:15:30.245 CC lib/ublk/ublk_rpc.o 00:15:30.245 CC lib/scsi/scsi_rpc.o 00:15:30.245 LIB libspdk_nbd.a 00:15:30.245 SO libspdk_nbd.so.7.0 00:15:30.245 SYMLINK libspdk_nbd.so 00:15:30.245 CC lib/scsi/task.o 00:15:30.245 LIB libspdk_ublk.a 00:15:30.504 CC lib/nvmf/transport.o 00:15:30.504 CC 
lib/nvmf/tcp.o 00:15:30.504 CC lib/ftl/ftl_debug.o 00:15:30.504 SO libspdk_ublk.so.3.0 00:15:30.504 CC lib/ftl/ftl_io.o 00:15:30.504 SYMLINK libspdk_ublk.so 00:15:30.504 CC lib/ftl/ftl_sb.o 00:15:30.761 CC lib/ftl/ftl_l2p.o 00:15:30.761 LIB libspdk_scsi.a 00:15:30.761 CC lib/ftl/ftl_l2p_flat.o 00:15:30.761 SO libspdk_scsi.so.9.0 00:15:30.761 CC lib/nvmf/rdma.o 00:15:30.761 CC lib/ftl/ftl_nv_cache.o 00:15:30.761 CC lib/ftl/ftl_band.o 00:15:31.019 SYMLINK libspdk_scsi.so 00:15:31.019 CC lib/ftl/ftl_band_ops.o 00:15:31.019 CC lib/ftl/ftl_writer.o 00:15:31.019 CC lib/ftl/ftl_rq.o 00:15:31.019 CC lib/ftl/ftl_reloc.o 00:15:31.321 CC lib/iscsi/conn.o 00:15:31.321 CC lib/ftl/ftl_l2p_cache.o 00:15:31.321 CC lib/ftl/ftl_p2l.o 00:15:31.321 CC lib/ftl/mngt/ftl_mngt.o 00:15:31.321 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:15:31.321 CC lib/vhost/vhost.o 00:15:31.579 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:15:31.579 CC lib/vhost/vhost_rpc.o 00:15:31.579 CC lib/vhost/vhost_scsi.o 00:15:31.579 CC lib/vhost/vhost_blk.o 00:15:31.579 CC lib/ftl/mngt/ftl_mngt_startup.o 00:15:31.837 CC lib/vhost/rte_vhost_user.o 00:15:31.837 CC lib/ftl/mngt/ftl_mngt_md.o 00:15:31.837 CC lib/iscsi/init_grp.o 00:15:31.837 CC lib/ftl/mngt/ftl_mngt_misc.o 00:15:32.095 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:15:32.095 CC lib/iscsi/iscsi.o 00:15:32.095 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:15:32.095 CC lib/ftl/mngt/ftl_mngt_band.o 00:15:32.095 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:15:32.352 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:15:32.352 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:15:32.352 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:15:32.352 CC lib/ftl/utils/ftl_conf.o 00:15:32.352 CC lib/ftl/utils/ftl_md.o 00:15:32.610 CC lib/ftl/utils/ftl_mempool.o 00:15:32.610 CC lib/ftl/utils/ftl_bitmap.o 00:15:32.610 CC lib/ftl/utils/ftl_property.o 00:15:32.610 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:15:32.610 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:15:32.610 CC lib/iscsi/md5.o 00:15:32.610 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:15:32.610 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:15:32.868 CC lib/iscsi/param.o 00:15:32.868 LIB libspdk_vhost.a 00:15:32.868 CC lib/iscsi/portal_grp.o 00:15:32.868 LIB libspdk_nvmf.a 00:15:32.868 CC lib/iscsi/tgt_node.o 00:15:32.868 SO libspdk_vhost.so.8.0 00:15:32.868 CC lib/iscsi/iscsi_subsystem.o 00:15:32.868 CC lib/iscsi/iscsi_rpc.o 00:15:32.868 CC lib/iscsi/task.o 00:15:33.126 SO libspdk_nvmf.so.18.0 00:15:33.126 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:15:33.126 SYMLINK libspdk_vhost.so 00:15:33.126 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:15:33.126 CC lib/ftl/upgrade/ftl_sb_v3.o 00:15:33.126 CC lib/ftl/upgrade/ftl_sb_v5.o 00:15:33.126 CC lib/ftl/nvc/ftl_nvc_dev.o 00:15:33.126 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:15:33.126 SYMLINK libspdk_nvmf.so 00:15:33.126 CC lib/ftl/base/ftl_base_dev.o 00:15:33.383 CC lib/ftl/base/ftl_base_bdev.o 00:15:33.383 CC lib/ftl/ftl_trace.o 00:15:33.383 LIB libspdk_iscsi.a 00:15:33.640 SO libspdk_iscsi.so.8.0 00:15:33.640 LIB libspdk_ftl.a 00:15:33.640 SYMLINK libspdk_iscsi.so 00:15:33.898 SO libspdk_ftl.so.9.0 00:15:34.155 SYMLINK libspdk_ftl.so 00:15:34.739 CC module/env_dpdk/env_dpdk_rpc.o 00:15:34.739 CC module/blob/bdev/blob_bdev.o 00:15:34.739 CC module/accel/dsa/accel_dsa.o 00:15:34.739 CC module/keyring/file/keyring.o 00:15:34.739 CC module/accel/ioat/accel_ioat.o 00:15:34.739 CC module/accel/iaa/accel_iaa.o 00:15:34.739 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:15:34.739 CC module/sock/posix/posix.o 00:15:34.739 CC module/accel/error/accel_error.o 00:15:34.739 CC 
module/scheduler/dynamic/scheduler_dynamic.o 00:15:34.739 LIB libspdk_env_dpdk_rpc.a 00:15:34.739 SO libspdk_env_dpdk_rpc.so.6.0 00:15:34.739 LIB libspdk_scheduler_dpdk_governor.a 00:15:34.739 CC module/keyring/file/keyring_rpc.o 00:15:34.739 SYMLINK libspdk_env_dpdk_rpc.so 00:15:34.739 SO libspdk_scheduler_dpdk_governor.so.4.0 00:15:34.739 CC module/accel/ioat/accel_ioat_rpc.o 00:15:34.739 CC module/accel/iaa/accel_iaa_rpc.o 00:15:34.739 LIB libspdk_scheduler_dynamic.a 00:15:34.996 CC module/accel/error/accel_error_rpc.o 00:15:34.996 CC module/accel/dsa/accel_dsa_rpc.o 00:15:34.996 SO libspdk_scheduler_dynamic.so.4.0 00:15:34.996 SYMLINK libspdk_scheduler_dpdk_governor.so 00:15:34.996 SYMLINK libspdk_scheduler_dynamic.so 00:15:34.996 LIB libspdk_blob_bdev.a 00:15:34.996 LIB libspdk_keyring_file.a 00:15:34.996 SO libspdk_blob_bdev.so.11.0 00:15:34.996 LIB libspdk_accel_iaa.a 00:15:34.996 LIB libspdk_accel_ioat.a 00:15:34.996 CC module/scheduler/gscheduler/gscheduler.o 00:15:34.996 SO libspdk_keyring_file.so.1.0 00:15:34.996 LIB libspdk_accel_error.a 00:15:34.996 SO libspdk_accel_iaa.so.3.0 00:15:34.996 LIB libspdk_accel_dsa.a 00:15:34.996 SYMLINK libspdk_blob_bdev.so 00:15:34.996 SO libspdk_accel_ioat.so.6.0 00:15:34.996 SO libspdk_accel_error.so.2.0 00:15:34.996 SO libspdk_accel_dsa.so.5.0 00:15:34.996 SYMLINK libspdk_accel_iaa.so 00:15:34.996 SYMLINK libspdk_keyring_file.so 00:15:34.996 SYMLINK libspdk_accel_ioat.so 00:15:34.996 SYMLINK libspdk_accel_error.so 00:15:35.253 SYMLINK libspdk_accel_dsa.so 00:15:35.253 LIB libspdk_scheduler_gscheduler.a 00:15:35.253 SO libspdk_scheduler_gscheduler.so.4.0 00:15:35.253 SYMLINK libspdk_scheduler_gscheduler.so 00:15:35.253 CC module/bdev/error/vbdev_error.o 00:15:35.253 CC module/bdev/malloc/bdev_malloc.o 00:15:35.253 CC module/bdev/lvol/vbdev_lvol.o 00:15:35.253 CC module/bdev/null/bdev_null.o 00:15:35.253 CC module/blobfs/bdev/blobfs_bdev.o 00:15:35.253 CC module/bdev/gpt/gpt.o 00:15:35.253 CC module/bdev/nvme/bdev_nvme.o 00:15:35.253 CC module/bdev/delay/vbdev_delay.o 00:15:35.510 LIB libspdk_sock_posix.a 00:15:35.511 SO libspdk_sock_posix.so.6.0 00:15:35.511 SYMLINK libspdk_sock_posix.so 00:15:35.511 CC module/bdev/passthru/vbdev_passthru.o 00:15:35.511 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:15:35.511 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:15:35.511 CC module/bdev/gpt/vbdev_gpt.o 00:15:35.511 CC module/bdev/null/bdev_null_rpc.o 00:15:35.768 CC module/bdev/error/vbdev_error_rpc.o 00:15:35.768 CC module/bdev/nvme/bdev_nvme_rpc.o 00:15:35.768 CC module/bdev/malloc/bdev_malloc_rpc.o 00:15:35.768 LIB libspdk_blobfs_bdev.a 00:15:35.768 SO libspdk_blobfs_bdev.so.6.0 00:15:35.768 CC module/bdev/delay/vbdev_delay_rpc.o 00:15:35.768 LIB libspdk_bdev_null.a 00:15:35.768 LIB libspdk_bdev_passthru.a 00:15:35.768 LIB libspdk_bdev_error.a 00:15:35.768 SO libspdk_bdev_null.so.6.0 00:15:35.768 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:15:35.768 SYMLINK libspdk_blobfs_bdev.so 00:15:35.768 SO libspdk_bdev_passthru.so.6.0 00:15:35.768 SO libspdk_bdev_error.so.6.0 00:15:35.768 LIB libspdk_bdev_malloc.a 00:15:36.026 LIB libspdk_bdev_gpt.a 00:15:36.026 SYMLINK libspdk_bdev_null.so 00:15:36.026 SYMLINK libspdk_bdev_passthru.so 00:15:36.026 SO libspdk_bdev_malloc.so.6.0 00:15:36.026 SO libspdk_bdev_gpt.so.6.0 00:15:36.026 SYMLINK libspdk_bdev_error.so 00:15:36.026 LIB libspdk_bdev_delay.a 00:15:36.026 SO libspdk_bdev_delay.so.6.0 00:15:36.026 SYMLINK libspdk_bdev_malloc.so 00:15:36.026 SYMLINK libspdk_bdev_gpt.so 00:15:36.026 CC module/bdev/raid/bdev_raid.o 
00:15:36.026 SYMLINK libspdk_bdev_delay.so 00:15:36.026 CC module/bdev/split/vbdev_split.o 00:15:36.026 CC module/bdev/split/vbdev_split_rpc.o 00:15:36.026 CC module/bdev/zone_block/vbdev_zone_block.o 00:15:36.026 CC module/bdev/aio/bdev_aio.o 00:15:36.284 LIB libspdk_bdev_lvol.a 00:15:36.284 CC module/bdev/iscsi/bdev_iscsi.o 00:15:36.284 SO libspdk_bdev_lvol.so.6.0 00:15:36.284 CC module/bdev/ftl/bdev_ftl.o 00:15:36.284 SYMLINK libspdk_bdev_lvol.so 00:15:36.284 CC module/bdev/ftl/bdev_ftl_rpc.o 00:15:36.284 LIB libspdk_bdev_split.a 00:15:36.284 SO libspdk_bdev_split.so.6.0 00:15:36.542 CC module/bdev/virtio/bdev_virtio_scsi.o 00:15:36.542 CC module/bdev/virtio/bdev_virtio_blk.o 00:15:36.542 SYMLINK libspdk_bdev_split.so 00:15:36.542 CC module/bdev/aio/bdev_aio_rpc.o 00:15:36.542 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:15:36.542 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:15:36.542 CC module/bdev/raid/bdev_raid_rpc.o 00:15:36.542 LIB libspdk_bdev_ftl.a 00:15:36.542 CC module/bdev/virtio/bdev_virtio_rpc.o 00:15:36.542 SO libspdk_bdev_ftl.so.6.0 00:15:36.542 LIB libspdk_bdev_aio.a 00:15:36.542 LIB libspdk_bdev_zone_block.a 00:15:36.542 LIB libspdk_bdev_iscsi.a 00:15:36.542 SO libspdk_bdev_aio.so.6.0 00:15:36.800 SYMLINK libspdk_bdev_ftl.so 00:15:36.800 SO libspdk_bdev_iscsi.so.6.0 00:15:36.800 SO libspdk_bdev_zone_block.so.6.0 00:15:36.800 SYMLINK libspdk_bdev_aio.so 00:15:36.800 CC module/bdev/raid/bdev_raid_sb.o 00:15:36.800 CC module/bdev/raid/raid0.o 00:15:36.800 CC module/bdev/nvme/nvme_rpc.o 00:15:36.800 CC module/bdev/nvme/bdev_mdns_client.o 00:15:36.800 SYMLINK libspdk_bdev_zone_block.so 00:15:36.800 CC module/bdev/nvme/vbdev_opal.o 00:15:36.800 SYMLINK libspdk_bdev_iscsi.so 00:15:36.800 CC module/bdev/nvme/vbdev_opal_rpc.o 00:15:36.800 CC module/bdev/raid/raid1.o 00:15:37.059 LIB libspdk_bdev_virtio.a 00:15:37.059 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:15:37.059 SO libspdk_bdev_virtio.so.6.0 00:15:37.059 CC module/bdev/raid/concat.o 00:15:37.059 SYMLINK libspdk_bdev_virtio.so 00:15:37.317 LIB libspdk_bdev_raid.a 00:15:37.317 SO libspdk_bdev_raid.so.6.0 00:15:37.317 SYMLINK libspdk_bdev_raid.so 00:15:37.883 LIB libspdk_bdev_nvme.a 00:15:37.883 SO libspdk_bdev_nvme.so.7.0 00:15:37.883 SYMLINK libspdk_bdev_nvme.so 00:15:38.448 CC module/event/subsystems/sock/sock.o 00:15:38.448 CC module/event/subsystems/scheduler/scheduler.o 00:15:38.448 CC module/event/subsystems/iobuf/iobuf.o 00:15:38.448 CC module/event/subsystems/keyring/keyring.o 00:15:38.448 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:15:38.448 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:15:38.448 CC module/event/subsystems/vmd/vmd_rpc.o 00:15:38.448 CC module/event/subsystems/vmd/vmd.o 00:15:38.705 LIB libspdk_event_keyring.a 00:15:38.705 LIB libspdk_event_scheduler.a 00:15:38.705 LIB libspdk_event_vhost_blk.a 00:15:38.705 LIB libspdk_event_sock.a 00:15:38.705 LIB libspdk_event_iobuf.a 00:15:38.705 SO libspdk_event_keyring.so.1.0 00:15:38.705 SO libspdk_event_scheduler.so.4.0 00:15:38.706 SO libspdk_event_vhost_blk.so.3.0 00:15:38.706 SO libspdk_event_sock.so.5.0 00:15:38.706 SO libspdk_event_iobuf.so.3.0 00:15:38.706 LIB libspdk_event_vmd.a 00:15:38.706 SYMLINK libspdk_event_keyring.so 00:15:38.706 SYMLINK libspdk_event_sock.so 00:15:38.706 SYMLINK libspdk_event_vhost_blk.so 00:15:38.706 SYMLINK libspdk_event_scheduler.so 00:15:38.706 SO libspdk_event_vmd.so.6.0 00:15:38.706 SYMLINK libspdk_event_iobuf.so 00:15:38.706 SYMLINK libspdk_event_vmd.so 00:15:38.964 CC module/event/subsystems/accel/accel.o 
00:15:39.222 LIB libspdk_event_accel.a 00:15:39.222 SO libspdk_event_accel.so.6.0 00:15:39.222 SYMLINK libspdk_event_accel.so 00:15:39.480 CC module/event/subsystems/bdev/bdev.o 00:15:39.738 LIB libspdk_event_bdev.a 00:15:39.738 SO libspdk_event_bdev.so.6.0 00:15:39.996 SYMLINK libspdk_event_bdev.so 00:15:39.996 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:15:39.996 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:15:39.996 CC module/event/subsystems/nbd/nbd.o 00:15:39.996 CC module/event/subsystems/ublk/ublk.o 00:15:39.996 CC module/event/subsystems/scsi/scsi.o 00:15:40.253 LIB libspdk_event_nbd.a 00:15:40.253 LIB libspdk_event_ublk.a 00:15:40.253 SO libspdk_event_nbd.so.6.0 00:15:40.253 LIB libspdk_event_scsi.a 00:15:40.253 SO libspdk_event_ublk.so.3.0 00:15:40.253 SO libspdk_event_scsi.so.6.0 00:15:40.253 LIB libspdk_event_nvmf.a 00:15:40.253 SYMLINK libspdk_event_ublk.so 00:15:40.253 SYMLINK libspdk_event_nbd.so 00:15:40.511 SO libspdk_event_nvmf.so.6.0 00:15:40.511 SYMLINK libspdk_event_scsi.so 00:15:40.511 SYMLINK libspdk_event_nvmf.so 00:15:40.511 CC module/event/subsystems/iscsi/iscsi.o 00:15:40.511 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:15:40.770 LIB libspdk_event_vhost_scsi.a 00:15:40.770 LIB libspdk_event_iscsi.a 00:15:41.028 SO libspdk_event_vhost_scsi.so.3.0 00:15:41.028 SO libspdk_event_iscsi.so.6.0 00:15:41.028 SYMLINK libspdk_event_vhost_scsi.so 00:15:41.028 SYMLINK libspdk_event_iscsi.so 00:15:41.287 SO libspdk.so.6.0 00:15:41.287 SYMLINK libspdk.so 00:15:41.287 CC app/spdk_nvme_perf/perf.o 00:15:41.287 CC app/trace_record/trace_record.o 00:15:41.287 CXX app/trace/trace.o 00:15:41.543 CC app/spdk_lspci/spdk_lspci.o 00:15:41.543 CC app/spdk_nvme_identify/identify.o 00:15:41.543 CC app/nvmf_tgt/nvmf_main.o 00:15:41.543 CC app/iscsi_tgt/iscsi_tgt.o 00:15:41.543 CC app/spdk_tgt/spdk_tgt.o 00:15:41.543 CC examples/accel/perf/accel_perf.o 00:15:41.543 CC test/accel/dif/dif.o 00:15:41.543 LINK spdk_lspci 00:15:41.543 LINK nvmf_tgt 00:15:41.800 LINK spdk_trace_record 00:15:41.800 LINK iscsi_tgt 00:15:41.800 LINK spdk_tgt 00:15:41.800 LINK spdk_trace 00:15:42.057 LINK dif 00:15:42.057 CC test/app/histogram_perf/histogram_perf.o 00:15:42.057 CC test/app/bdev_svc/bdev_svc.o 00:15:42.057 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:15:42.315 LINK histogram_perf 00:15:42.315 LINK accel_perf 00:15:42.315 LINK bdev_svc 00:15:42.315 CC test/app/jsoncat/jsoncat.o 00:15:42.315 CC test/bdev/bdevio/bdevio.o 00:15:42.315 LINK spdk_nvme_perf 00:15:42.315 CC app/spdk_nvme_discover/discovery_aer.o 00:15:42.315 LINK spdk_nvme_identify 00:15:42.574 CC app/spdk_top/spdk_top.o 00:15:42.574 LINK jsoncat 00:15:42.574 LINK nvme_fuzz 00:15:42.574 LINK spdk_nvme_discover 00:15:42.574 CC app/vhost/vhost.o 00:15:42.831 CC app/spdk_dd/spdk_dd.o 00:15:42.831 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:15:42.831 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:15:42.831 CC examples/bdev/hello_world/hello_bdev.o 00:15:42.831 LINK bdevio 00:15:42.831 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:15:42.831 LINK vhost 00:15:43.088 LINK hello_bdev 00:15:43.088 CC app/fio/nvme/fio_plugin.o 00:15:43.088 CC examples/blob/hello_world/hello_blob.o 00:15:43.088 CC test/blobfs/mkfs/mkfs.o 00:15:43.345 LINK vhost_fuzz 00:15:43.345 LINK spdk_dd 00:15:43.345 TEST_HEADER include/spdk/accel.h 00:15:43.345 TEST_HEADER include/spdk/accel_module.h 00:15:43.345 TEST_HEADER include/spdk/assert.h 00:15:43.345 TEST_HEADER include/spdk/barrier.h 00:15:43.345 TEST_HEADER include/spdk/base64.h 00:15:43.345 TEST_HEADER 
include/spdk/bdev.h 00:15:43.345 TEST_HEADER include/spdk/bdev_module.h 00:15:43.345 TEST_HEADER include/spdk/bdev_zone.h 00:15:43.345 TEST_HEADER include/spdk/bit_array.h 00:15:43.345 TEST_HEADER include/spdk/bit_pool.h 00:15:43.345 TEST_HEADER include/spdk/blob_bdev.h 00:15:43.345 TEST_HEADER include/spdk/blobfs_bdev.h 00:15:43.345 TEST_HEADER include/spdk/blobfs.h 00:15:43.346 TEST_HEADER include/spdk/blob.h 00:15:43.346 TEST_HEADER include/spdk/conf.h 00:15:43.346 TEST_HEADER include/spdk/config.h 00:15:43.346 TEST_HEADER include/spdk/cpuset.h 00:15:43.346 TEST_HEADER include/spdk/crc16.h 00:15:43.346 TEST_HEADER include/spdk/crc32.h 00:15:43.346 TEST_HEADER include/spdk/crc64.h 00:15:43.346 TEST_HEADER include/spdk/dif.h 00:15:43.346 TEST_HEADER include/spdk/dma.h 00:15:43.346 TEST_HEADER include/spdk/endian.h 00:15:43.346 TEST_HEADER include/spdk/env_dpdk.h 00:15:43.346 TEST_HEADER include/spdk/env.h 00:15:43.346 TEST_HEADER include/spdk/event.h 00:15:43.346 TEST_HEADER include/spdk/fd_group.h 00:15:43.346 TEST_HEADER include/spdk/fd.h 00:15:43.346 TEST_HEADER include/spdk/file.h 00:15:43.346 TEST_HEADER include/spdk/ftl.h 00:15:43.346 TEST_HEADER include/spdk/gpt_spec.h 00:15:43.346 TEST_HEADER include/spdk/hexlify.h 00:15:43.346 TEST_HEADER include/spdk/histogram_data.h 00:15:43.346 TEST_HEADER include/spdk/idxd.h 00:15:43.346 TEST_HEADER include/spdk/idxd_spec.h 00:15:43.346 TEST_HEADER include/spdk/init.h 00:15:43.346 TEST_HEADER include/spdk/ioat.h 00:15:43.346 TEST_HEADER include/spdk/ioat_spec.h 00:15:43.346 TEST_HEADER include/spdk/iscsi_spec.h 00:15:43.346 CC app/fio/bdev/fio_plugin.o 00:15:43.346 TEST_HEADER include/spdk/json.h 00:15:43.346 TEST_HEADER include/spdk/jsonrpc.h 00:15:43.346 TEST_HEADER include/spdk/keyring.h 00:15:43.346 TEST_HEADER include/spdk/keyring_module.h 00:15:43.346 TEST_HEADER include/spdk/likely.h 00:15:43.346 TEST_HEADER include/spdk/log.h 00:15:43.346 LINK hello_blob 00:15:43.346 TEST_HEADER include/spdk/lvol.h 00:15:43.346 TEST_HEADER include/spdk/memory.h 00:15:43.346 TEST_HEADER include/spdk/mmio.h 00:15:43.346 TEST_HEADER include/spdk/nbd.h 00:15:43.346 TEST_HEADER include/spdk/notify.h 00:15:43.346 TEST_HEADER include/spdk/nvme.h 00:15:43.346 TEST_HEADER include/spdk/nvme_intel.h 00:15:43.346 TEST_HEADER include/spdk/nvme_ocssd.h 00:15:43.346 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:15:43.346 TEST_HEADER include/spdk/nvme_spec.h 00:15:43.346 TEST_HEADER include/spdk/nvme_zns.h 00:15:43.346 TEST_HEADER include/spdk/nvmf_cmd.h 00:15:43.346 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:15:43.346 TEST_HEADER include/spdk/nvmf.h 00:15:43.346 TEST_HEADER include/spdk/nvmf_spec.h 00:15:43.346 TEST_HEADER include/spdk/nvmf_transport.h 00:15:43.346 TEST_HEADER include/spdk/opal.h 00:15:43.346 TEST_HEADER include/spdk/opal_spec.h 00:15:43.346 TEST_HEADER include/spdk/pci_ids.h 00:15:43.346 TEST_HEADER include/spdk/pipe.h 00:15:43.346 TEST_HEADER include/spdk/queue.h 00:15:43.346 TEST_HEADER include/spdk/reduce.h 00:15:43.346 TEST_HEADER include/spdk/rpc.h 00:15:43.346 TEST_HEADER include/spdk/scheduler.h 00:15:43.346 TEST_HEADER include/spdk/scsi.h 00:15:43.346 LINK mkfs 00:15:43.346 TEST_HEADER include/spdk/scsi_spec.h 00:15:43.346 TEST_HEADER include/spdk/sock.h 00:15:43.346 TEST_HEADER include/spdk/stdinc.h 00:15:43.346 TEST_HEADER include/spdk/string.h 00:15:43.346 TEST_HEADER include/spdk/thread.h 00:15:43.346 TEST_HEADER include/spdk/trace.h 00:15:43.346 TEST_HEADER include/spdk/trace_parser.h 00:15:43.346 TEST_HEADER include/spdk/tree.h 
00:15:43.346 TEST_HEADER include/spdk/ublk.h 00:15:43.346 TEST_HEADER include/spdk/util.h 00:15:43.346 TEST_HEADER include/spdk/uuid.h 00:15:43.346 TEST_HEADER include/spdk/version.h 00:15:43.346 TEST_HEADER include/spdk/vfio_user_pci.h 00:15:43.346 TEST_HEADER include/spdk/vfio_user_spec.h 00:15:43.346 TEST_HEADER include/spdk/vhost.h 00:15:43.346 TEST_HEADER include/spdk/vmd.h 00:15:43.346 TEST_HEADER include/spdk/xor.h 00:15:43.346 TEST_HEADER include/spdk/zipf.h 00:15:43.346 CXX test/cpp_headers/accel.o 00:15:43.603 CXX test/cpp_headers/accel_module.o 00:15:43.603 LINK spdk_top 00:15:43.603 CC examples/bdev/bdevperf/bdevperf.o 00:15:43.604 LINK spdk_nvme 00:15:43.604 CXX test/cpp_headers/assert.o 00:15:43.604 CC examples/blob/cli/blobcli.o 00:15:43.862 CXX test/cpp_headers/barrier.o 00:15:43.862 CC examples/ioat/perf/perf.o 00:15:43.862 CC examples/ioat/verify/verify.o 00:15:43.862 LINK spdk_bdev 00:15:43.862 CC examples/nvme/hello_world/hello_world.o 00:15:43.862 CC examples/sock/hello_world/hello_sock.o 00:15:44.119 CC test/app/stub/stub.o 00:15:44.119 LINK ioat_perf 00:15:44.119 CXX test/cpp_headers/base64.o 00:15:44.119 CXX test/cpp_headers/bdev.o 00:15:44.119 LINK hello_world 00:15:44.119 LINK verify 00:15:44.119 LINK hello_sock 00:15:44.119 CXX test/cpp_headers/bdev_module.o 00:15:44.377 LINK stub 00:15:44.377 LINK bdevperf 00:15:44.377 CXX test/cpp_headers/bdev_zone.o 00:15:44.633 CC examples/nvme/reconnect/reconnect.o 00:15:44.633 LINK blobcli 00:15:44.633 CC examples/util/zipf/zipf.o 00:15:44.633 CC examples/vmd/lsvmd/lsvmd.o 00:15:44.634 CC examples/nvme/nvme_manage/nvme_manage.o 00:15:44.634 CC examples/nvmf/nvmf/nvmf.o 00:15:44.634 CXX test/cpp_headers/bit_array.o 00:15:44.634 CC examples/thread/thread/thread_ex.o 00:15:44.634 LINK lsvmd 00:15:44.634 LINK zipf 00:15:44.892 LINK iscsi_fuzz 00:15:44.892 CC examples/vmd/led/led.o 00:15:44.892 LINK reconnect 00:15:44.892 CXX test/cpp_headers/bit_pool.o 00:15:44.892 CXX test/cpp_headers/blob_bdev.o 00:15:44.892 LINK thread 00:15:44.892 CC examples/nvme/arbitration/arbitration.o 00:15:45.150 LINK led 00:15:45.150 CC examples/idxd/perf/perf.o 00:15:45.150 LINK nvmf 00:15:45.150 CXX test/cpp_headers/blobfs_bdev.o 00:15:45.407 LINK nvme_manage 00:15:45.407 CC test/dma/test_dma/test_dma.o 00:15:45.407 CC examples/interrupt_tgt/interrupt_tgt.o 00:15:45.407 CC examples/nvme/hotplug/hotplug.o 00:15:45.665 LINK arbitration 00:15:45.665 CXX test/cpp_headers/blobfs.o 00:15:45.665 CC test/event/event_perf/event_perf.o 00:15:45.665 CC test/env/mem_callbacks/mem_callbacks.o 00:15:45.665 CC examples/nvme/cmb_copy/cmb_copy.o 00:15:45.665 LINK interrupt_tgt 00:15:45.665 LINK hotplug 00:15:45.665 LINK idxd_perf 00:15:45.665 LINK event_perf 00:15:45.665 CXX test/cpp_headers/blob.o 00:15:45.923 LINK test_dma 00:15:45.923 LINK cmb_copy 00:15:45.923 CC test/env/vtophys/vtophys.o 00:15:45.923 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:15:45.923 CXX test/cpp_headers/conf.o 00:15:45.923 CC test/event/reactor/reactor.o 00:15:45.923 CC test/event/reactor_perf/reactor_perf.o 00:15:46.181 CC test/event/app_repeat/app_repeat.o 00:15:46.181 LINK vtophys 00:15:46.181 LINK reactor_perf 00:15:46.181 LINK reactor 00:15:46.181 LINK env_dpdk_post_init 00:15:46.181 CXX test/cpp_headers/config.o 00:15:46.181 CC examples/nvme/abort/abort.o 00:15:46.181 CXX test/cpp_headers/cpuset.o 00:15:46.181 LINK app_repeat 00:15:46.181 LINK mem_callbacks 00:15:46.439 CXX test/cpp_headers/crc16.o 00:15:46.439 CC test/event/scheduler/scheduler.o 00:15:46.439 CC 
test/lvol/esnap/esnap.o 00:15:46.439 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:15:46.439 CXX test/cpp_headers/crc32.o 00:15:46.439 CC test/env/memory/memory_ut.o 00:15:46.699 CC test/env/pci/pci_ut.o 00:15:46.699 CXX test/cpp_headers/crc64.o 00:15:46.699 LINK abort 00:15:46.699 LINK scheduler 00:15:46.699 LINK pmr_persistence 00:15:46.957 CXX test/cpp_headers/dif.o 00:15:46.957 CC test/nvme/aer/aer.o 00:15:46.957 CC test/rpc_client/rpc_client_test.o 00:15:46.957 CXX test/cpp_headers/dma.o 00:15:46.957 LINK pci_ut 00:15:46.957 LINK rpc_client_test 00:15:47.215 CC test/thread/poller_perf/poller_perf.o 00:15:47.215 CC test/nvme/reset/reset.o 00:15:47.215 CC test/nvme/sgl/sgl.o 00:15:47.215 LINK aer 00:15:47.215 CXX test/cpp_headers/endian.o 00:15:47.215 LINK poller_perf 00:15:47.215 CXX test/cpp_headers/env_dpdk.o 00:15:47.473 CXX test/cpp_headers/env.o 00:15:47.473 LINK reset 00:15:47.473 CC test/nvme/e2edp/nvme_dp.o 00:15:47.473 LINK sgl 00:15:47.473 CC test/nvme/overhead/overhead.o 00:15:47.473 CXX test/cpp_headers/event.o 00:15:47.473 LINK memory_ut 00:15:47.473 CC test/nvme/startup/startup.o 00:15:47.473 CC test/nvme/err_injection/err_injection.o 00:15:47.731 CC test/nvme/reserve/reserve.o 00:15:47.731 LINK nvme_dp 00:15:47.731 LINK startup 00:15:47.731 CXX test/cpp_headers/fd_group.o 00:15:47.731 CC test/nvme/simple_copy/simple_copy.o 00:15:47.731 LINK err_injection 00:15:47.731 LINK overhead 00:15:47.989 LINK reserve 00:15:47.989 CXX test/cpp_headers/fd.o 00:15:47.989 CC test/nvme/connect_stress/connect_stress.o 00:15:47.989 CXX test/cpp_headers/file.o 00:15:47.989 LINK simple_copy 00:15:48.247 CC test/nvme/boot_partition/boot_partition.o 00:15:48.247 CC test/nvme/compliance/nvme_compliance.o 00:15:48.247 CC test/nvme/fused_ordering/fused_ordering.o 00:15:48.247 LINK connect_stress 00:15:48.247 CC test/nvme/doorbell_aers/doorbell_aers.o 00:15:48.247 CXX test/cpp_headers/ftl.o 00:15:48.505 LINK boot_partition 00:15:48.505 CC test/nvme/fdp/fdp.o 00:15:48.505 CXX test/cpp_headers/gpt_spec.o 00:15:48.505 CC test/nvme/cuse/cuse.o 00:15:48.505 LINK fused_ordering 00:15:48.505 CXX test/cpp_headers/hexlify.o 00:15:48.764 LINK doorbell_aers 00:15:48.764 CXX test/cpp_headers/histogram_data.o 00:15:48.764 LINK nvme_compliance 00:15:48.764 CXX test/cpp_headers/idxd.o 00:15:48.764 CXX test/cpp_headers/idxd_spec.o 00:15:48.764 CXX test/cpp_headers/init.o 00:15:49.022 CXX test/cpp_headers/ioat.o 00:15:49.022 CXX test/cpp_headers/ioat_spec.o 00:15:49.022 LINK fdp 00:15:49.022 CXX test/cpp_headers/iscsi_spec.o 00:15:49.022 CXX test/cpp_headers/json.o 00:15:49.022 CXX test/cpp_headers/jsonrpc.o 00:15:49.022 CXX test/cpp_headers/keyring.o 00:15:49.022 CXX test/cpp_headers/keyring_module.o 00:15:49.022 CXX test/cpp_headers/likely.o 00:15:49.022 CXX test/cpp_headers/log.o 00:15:49.022 CXX test/cpp_headers/lvol.o 00:15:49.281 CXX test/cpp_headers/memory.o 00:15:49.281 CXX test/cpp_headers/mmio.o 00:15:49.281 CXX test/cpp_headers/nbd.o 00:15:49.281 CXX test/cpp_headers/notify.o 00:15:49.281 CXX test/cpp_headers/nvme.o 00:15:49.281 CXX test/cpp_headers/nvme_intel.o 00:15:49.281 CXX test/cpp_headers/nvme_ocssd.o 00:15:49.281 CXX test/cpp_headers/nvme_ocssd_spec.o 00:15:49.281 CXX test/cpp_headers/nvme_spec.o 00:15:49.281 CXX test/cpp_headers/nvme_zns.o 00:15:49.538 CXX test/cpp_headers/nvmf_cmd.o 00:15:49.538 CXX test/cpp_headers/nvmf_fc_spec.o 00:15:49.538 CXX test/cpp_headers/nvmf.o 00:15:49.538 CXX test/cpp_headers/nvmf_spec.o 00:15:49.538 CXX test/cpp_headers/nvmf_transport.o 00:15:49.538 CXX 
test/cpp_headers/opal.o 00:15:49.797 CXX test/cpp_headers/opal_spec.o 00:15:49.797 CXX test/cpp_headers/pci_ids.o 00:15:49.797 LINK cuse 00:15:50.054 CXX test/cpp_headers/pipe.o 00:15:50.054 CXX test/cpp_headers/queue.o 00:15:50.054 CXX test/cpp_headers/reduce.o 00:15:50.054 CXX test/cpp_headers/rpc.o 00:15:50.054 CXX test/cpp_headers/scheduler.o 00:15:50.054 CXX test/cpp_headers/scsi.o 00:15:50.054 CXX test/cpp_headers/scsi_spec.o 00:15:50.054 CXX test/cpp_headers/sock.o 00:15:50.313 CXX test/cpp_headers/stdinc.o 00:15:50.313 CXX test/cpp_headers/string.o 00:15:50.313 CXX test/cpp_headers/thread.o 00:15:50.313 CXX test/cpp_headers/trace.o 00:15:50.313 CXX test/cpp_headers/trace_parser.o 00:15:50.313 CXX test/cpp_headers/tree.o 00:15:50.313 CXX test/cpp_headers/ublk.o 00:15:50.313 CXX test/cpp_headers/util.o 00:15:50.313 CXX test/cpp_headers/uuid.o 00:15:50.593 CXX test/cpp_headers/version.o 00:15:50.593 CXX test/cpp_headers/vfio_user_pci.o 00:15:50.593 CXX test/cpp_headers/vfio_user_spec.o 00:15:50.593 CXX test/cpp_headers/vhost.o 00:15:50.593 CXX test/cpp_headers/vmd.o 00:15:50.593 CXX test/cpp_headers/xor.o 00:15:50.593 CXX test/cpp_headers/zipf.o 00:15:51.968 LINK esnap 00:15:53.342 00:15:53.342 real 1m8.798s 00:15:53.342 user 7m4.470s 00:15:53.342 sys 1m42.989s 00:15:53.342 15:34:23 -- common/autotest_common.sh@1112 -- $ xtrace_disable 00:15:53.342 15:34:23 -- common/autotest_common.sh@10 -- $ set +x 00:15:53.342 ************************************ 00:15:53.342 END TEST make 00:15:53.342 ************************************ 00:15:53.342 15:34:23 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:15:53.342 15:34:23 -- pm/common@30 -- $ signal_monitor_resources TERM 00:15:53.342 15:34:23 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:15:53.342 15:34:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:53.342 15:34:23 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:15:53.342 15:34:23 -- pm/common@45 -- $ pid=5207 00:15:53.342 15:34:23 -- pm/common@52 -- $ sudo kill -TERM 5207 00:15:53.342 15:34:23 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:15:53.342 15:34:23 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:15:53.342 15:34:23 -- pm/common@45 -- $ pid=5208 00:15:53.342 15:34:23 -- pm/common@52 -- $ sudo kill -TERM 5208 00:15:53.342 15:34:23 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:15:53.342 15:34:23 -- nvmf/common.sh@7 -- # uname -s 00:15:53.342 15:34:23 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:53.342 15:34:23 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:53.342 15:34:23 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:53.342 15:34:23 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:53.342 15:34:23 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:53.342 15:34:23 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:53.342 15:34:23 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:53.342 15:34:23 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:53.342 15:34:23 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:53.342 15:34:23 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:53.342 15:34:23 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:15:53.342 15:34:23 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:15:53.342 15:34:23 -- nvmf/common.sh@19 
-- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:53.342 15:34:23 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:53.342 15:34:23 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:15:53.342 15:34:23 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:53.342 15:34:23 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:15:53.342 15:34:23 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:53.342 15:34:23 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:53.342 15:34:23 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:53.342 15:34:23 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.342 15:34:23 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.342 15:34:23 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.342 15:34:23 -- paths/export.sh@5 -- # export PATH 00:15:53.342 15:34:23 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:53.342 15:34:23 -- nvmf/common.sh@47 -- # : 0 00:15:53.342 15:34:23 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:53.342 15:34:23 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:53.342 15:34:23 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:53.342 15:34:23 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:53.342 15:34:23 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:53.342 15:34:23 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:53.342 15:34:23 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:53.342 15:34:23 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:53.342 15:34:23 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:15:53.342 15:34:23 -- spdk/autotest.sh@32 -- # uname -s 00:15:53.342 15:34:23 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:15:53.342 15:34:23 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:15:53.342 15:34:23 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:53.342 15:34:23 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:15:53.342 15:34:23 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:15:53.342 15:34:23 -- spdk/autotest.sh@44 -- # modprobe nbd 00:15:53.342 15:34:23 -- spdk/autotest.sh@46 -- # type -P udevadm 00:15:53.342 15:34:23 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:15:53.342 15:34:23 -- spdk/autotest.sh@48 -- # udevadm_pid=53999 00:15:53.342 15:34:23 
-- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:15:53.342 15:34:23 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:15:53.342 15:34:23 -- pm/common@17 -- # local monitor 00:15:53.342 15:34:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:53.342 15:34:23 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54000 00:15:53.342 15:34:23 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:15:53.342 15:34:23 -- pm/common@21 -- # date +%s 00:15:53.342 15:34:23 -- pm/common@23 -- # MONITOR_RESOURCES_PIDS["$monitor"]=54002 00:15:53.342 15:34:23 -- pm/common@26 -- # sleep 1 00:15:53.342 15:34:23 -- pm/common@21 -- # date +%s 00:15:53.342 15:34:23 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714145663 00:15:53.342 15:34:23 -- pm/common@21 -- # sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1714145663 00:15:53.599 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714145663_collect-vmstat.pm.log 00:15:53.599 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1714145663_collect-cpu-load.pm.log 00:15:54.531 15:34:24 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:15:54.531 15:34:24 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:15:54.531 15:34:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:15:54.531 15:34:24 -- common/autotest_common.sh@10 -- # set +x 00:15:54.531 15:34:24 -- spdk/autotest.sh@59 -- # create_test_list 00:15:54.531 15:34:24 -- common/autotest_common.sh@734 -- # xtrace_disable 00:15:54.531 15:34:24 -- common/autotest_common.sh@10 -- # set +x 00:15:54.531 15:34:24 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:15:54.531 15:34:24 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:15:54.531 15:34:24 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:15:54.531 15:34:24 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:15:54.531 15:34:24 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:15:54.531 15:34:24 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:15:54.531 15:34:24 -- common/autotest_common.sh@1441 -- # uname 00:15:54.531 15:34:24 -- common/autotest_common.sh@1441 -- # '[' Linux = FreeBSD ']' 00:15:54.531 15:34:24 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:15:54.531 15:34:24 -- common/autotest_common.sh@1461 -- # uname 00:15:54.531 15:34:24 -- common/autotest_common.sh@1461 -- # [[ Linux = FreeBSD ]] 00:15:54.531 15:34:24 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:15:54.531 15:34:24 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:15:54.531 15:34:24 -- spdk/autotest.sh@72 -- # hash lcov 00:15:54.531 15:34:24 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:15:54.531 15:34:24 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:15:54.531 --rc lcov_branch_coverage=1 00:15:54.531 --rc lcov_function_coverage=1 00:15:54.531 --rc genhtml_branch_coverage=1 00:15:54.531 --rc genhtml_function_coverage=1 00:15:54.531 --rc genhtml_legend=1 00:15:54.531 --rc geninfo_all_blocks=1 00:15:54.531 ' 00:15:54.531 15:34:24 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:15:54.531 --rc lcov_branch_coverage=1 00:15:54.531 --rc lcov_function_coverage=1 00:15:54.531 --rc 
genhtml_branch_coverage=1 00:15:54.531 --rc genhtml_function_coverage=1 00:15:54.531 --rc genhtml_legend=1 00:15:54.531 --rc geninfo_all_blocks=1 00:15:54.531 ' 00:15:54.531 15:34:24 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:15:54.531 --rc lcov_branch_coverage=1 00:15:54.531 --rc lcov_function_coverage=1 00:15:54.531 --rc genhtml_branch_coverage=1 00:15:54.531 --rc genhtml_function_coverage=1 00:15:54.531 --rc genhtml_legend=1 00:15:54.531 --rc geninfo_all_blocks=1 00:15:54.531 --no-external' 00:15:54.531 15:34:24 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:15:54.531 --rc lcov_branch_coverage=1 00:15:54.531 --rc lcov_function_coverage=1 00:15:54.531 --rc genhtml_branch_coverage=1 00:15:54.531 --rc genhtml_function_coverage=1 00:15:54.531 --rc genhtml_legend=1 00:15:54.531 --rc geninfo_all_blocks=1 00:15:54.531 --no-external' 00:15:54.531 15:34:24 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:15:54.531 lcov: LCOV version 1.14 00:15:54.531 15:34:24 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:16:02.639 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno:no functions found 00:16:02.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_p2l_upgrade.gcno 00:16:02.639 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno:no functions found 00:16:02.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_band_upgrade.gcno 00:16:02.639 /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno:no functions found 00:16:02.639 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/ftl/upgrade/ftl_chunk_upgrade.gcno 00:16:09.199 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:16:09.199 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/accel_module.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/assert.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/barrier.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/base64.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno:no functions found 00:16:21.399 geninfo: 
WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_module.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bdev_zone.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_array.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/bit_pool.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob_bdev.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs_bdev.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blobfs.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/blob.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/conf.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/config.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/cpuset.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc16.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc32.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/crc64.gcno 00:16:21.399 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno:no functions found 00:16:21.399 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dif.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/dma.gcno 00:16:21.400 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/endian.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env_dpdk.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/env.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/event.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd_group.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/fd.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/file.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ftl.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/gpt_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/hexlify.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/histogram_data.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/idxd_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/init.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ioat_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data 
for /home/vagrant/spdk_repo/spdk/test/cpp_headers/iscsi_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/json.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/jsonrpc.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/keyring_module.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/likely.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/lvol.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/log.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/memory.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/mmio.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nbd.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/notify.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_intel.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_spec.gcno 00:16:21.400 
/home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvme_zns.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_cmd.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/nvmf_transport.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/opal_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pci_ids.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/pipe.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/queue.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/rpc.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/reduce.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scheduler.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi_spec.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/scsi.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno:no functions found 00:16:21.400 geninfo: WARNING: 
GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/sock.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/stdinc.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/string.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/thread.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/trace_parser.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/tree.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/ublk.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno:no functions found 00:16:21.400 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/util.gcno 00:16:21.400 /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno:no functions found 00:16:21.401 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/uuid.gcno 00:16:21.401 /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno:no functions found 00:16:21.401 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/version.gcno 00:16:21.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:16:21.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_pci.gcno 00:16:21.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno:no functions found 00:16:21.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vhost.gcno 00:16:21.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:16:21.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vfio_user_spec.gcno 00:16:21.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno:no functions found 00:16:21.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/vmd.gcno 00:16:21.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno:no functions found 00:16:21.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/xor.gcno 00:16:21.659 /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno:no functions found 00:16:21.659 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/test/cpp_headers/zipf.gcno 00:16:24.952 15:34:55 -- spdk/autotest.sh@89 -- 
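The long run of geninfo warnings above is expected: the cpp_headers objects only verify that each public header compiles on its own, so their .gcno files contain no functions to report. The "-c -i" capture they come from produces the all-zero baseline (cov_base.info). A hedged sketch of the rest of the usual lcov workflow such a baseline feeds into (output file names here are illustrative, not necessarily the ones autotest uses later):

# after the tests have run, capture the real counters and merge them with the
# baseline so files never exercised by any test still show up at 0% coverage
lcov $LCOV_OPTS -q -c -d "$src" -t Tests -o cov_test.info
lcov $LCOV_OPTS -a cov_base.info -a cov_test.info -o cov_total.info
genhtml cov_total.info --output-directory coverage_html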
# timing_enter pre_cleanup 00:16:24.952 15:34:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:24.952 15:34:55 -- common/autotest_common.sh@10 -- # set +x 00:16:24.952 15:34:55 -- spdk/autotest.sh@91 -- # rm -f 00:16:24.952 15:34:55 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:25.518 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:25.777 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:16:25.777 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:16:25.777 15:34:55 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:16:25.777 15:34:55 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:16:25.777 15:34:55 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:16:25.777 15:34:55 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:16:25.777 15:34:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:25.777 15:34:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:16:25.777 15:34:55 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:25.777 15:34:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:25.777 15:34:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:25.778 15:34:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:25.778 15:34:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:16:25.778 15:34:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:25.778 15:34:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:25.778 15:34:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:25.778 15:34:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:25.778 15:34:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:16:25.778 15:34:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:16:25.778 15:34:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:25.778 15:34:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:25.778 15:34:55 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:25.778 15:34:55 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:16:25.778 15:34:55 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:16:25.778 15:34:55 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:25.778 15:34:55 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:25.778 15:34:55 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:16:25.778 15:34:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:16:25.778 15:34:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:16:25.778 15:34:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:16:25.778 15:34:55 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:16:25.778 15:34:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:16:25.778 No valid GPT data, bailing 00:16:25.778 15:34:55 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:25.778 15:34:55 -- scripts/common.sh@391 -- # pt= 00:16:25.778 15:34:55 -- scripts/common.sh@392 -- # return 1 00:16:25.778 15:34:55 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:16:25.778 1+0 records in 00:16:25.778 1+0 records out 00:16:25.778 1048576 bytes (1.0 MB, 1.0 
MiB) copied, 0.00463385 s, 226 MB/s 00:16:25.778 15:34:55 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:16:25.778 15:34:55 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:16:25.778 15:34:55 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n1 00:16:25.778 15:34:55 -- scripts/common.sh@378 -- # local block=/dev/nvme1n1 pt 00:16:25.778 15:34:55 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:16:25.778 No valid GPT data, bailing 00:16:25.778 15:34:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:25.778 15:34:56 -- scripts/common.sh@391 -- # pt= 00:16:25.778 15:34:56 -- scripts/common.sh@392 -- # return 1 00:16:25.778 15:34:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:16:25.778 1+0 records in 00:16:25.778 1+0 records out 00:16:25.778 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00558455 s, 188 MB/s 00:16:25.778 15:34:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:16:25.778 15:34:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:16:25.778 15:34:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n2 00:16:25.778 15:34:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n2 pt 00:16:25.778 15:34:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:16:26.036 No valid GPT data, bailing 00:16:26.036 15:34:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:16:26.036 15:34:56 -- scripts/common.sh@391 -- # pt= 00:16:26.036 15:34:56 -- scripts/common.sh@392 -- # return 1 00:16:26.036 15:34:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:16:26.036 1+0 records in 00:16:26.036 1+0 records out 00:16:26.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00465037 s, 225 MB/s 00:16:26.036 15:34:56 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:16:26.036 15:34:56 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:16:26.036 15:34:56 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme1n3 00:16:26.036 15:34:56 -- scripts/common.sh@378 -- # local block=/dev/nvme1n3 pt 00:16:26.036 15:34:56 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:16:26.036 No valid GPT data, bailing 00:16:26.036 15:34:56 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:16:26.036 15:34:56 -- scripts/common.sh@391 -- # pt= 00:16:26.036 15:34:56 -- scripts/common.sh@392 -- # return 1 00:16:26.036 15:34:56 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:16:26.036 1+0 records in 00:16:26.037 1+0 records out 00:16:26.037 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0053232 s, 197 MB/s 00:16:26.037 15:34:56 -- spdk/autotest.sh@118 -- # sync 00:16:26.037 15:34:56 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:16:26.037 15:34:56 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:16:26.037 15:34:56 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:16:27.936 15:34:58 -- spdk/autotest.sh@124 -- # uname -s 00:16:27.936 15:34:58 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:16:27.936 15:34:58 -- spdk/autotest.sh@125 -- # run_test setup.sh /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:16:27.936 15:34:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:27.936 15:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:27.936 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:16:27.936 
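pre_cleanup walks every NVMe namespace, skips zoned ones, and wipes the first MiB of anything that carries no partition table, which is why each device above reports "No valid GPT data, bailing" followed by a 1 MiB dd. A condensed sketch of that loop (the real script also consults scripts/spdk-gpt.py before falling back to blkid, and runs with root privileges):

shopt -s extglob nullglob
for dev in /dev/nvme*n!(*p*); do                    # whole namespaces only, no partitions
    name=${dev##*/}
    # skip zoned namespaces; "none" means a regular (non-zoned) block device
    [[ $(cat "/sys/block/$name/queue/zoned" 2>/dev/null) != none ]] && continue
    # no partition-table type reported -> treat the device as free and zero its first MiB
    if [[ -z $(blkid -s PTTYPE -o value "$dev") ]]; then
        dd if=/dev/zero of="$dev" bs=1M count=1
    fi
done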
************************************ 00:16:27.936 START TEST setup.sh 00:16:27.936 ************************************ 00:16:27.936 15:34:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/test-setup.sh 00:16:28.195 * Looking for test storage... 00:16:28.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:28.195 15:34:58 -- setup/test-setup.sh@10 -- # uname -s 00:16:28.195 15:34:58 -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:16:28.195 15:34:58 -- setup/test-setup.sh@12 -- # run_test acl /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:16:28.195 15:34:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:28.195 15:34:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:28.195 15:34:58 -- common/autotest_common.sh@10 -- # set +x 00:16:28.195 ************************************ 00:16:28.195 START TEST acl 00:16:28.195 ************************************ 00:16:28.195 15:34:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/acl.sh 00:16:28.195 * Looking for test storage... 00:16:28.195 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:28.195 15:34:58 -- setup/acl.sh@10 -- # get_zoned_devs 00:16:28.195 15:34:58 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:16:28.195 15:34:58 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:16:28.195 15:34:58 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:16:28.195 15:34:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:28.195 15:34:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:16:28.195 15:34:58 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:28.195 15:34:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:28.195 15:34:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:16:28.195 15:34:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:28.195 15:34:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:28.195 15:34:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n2 00:16:28.195 15:34:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n2 00:16:28.195 15:34:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:28.195 15:34:58 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n3 00:16:28.195 15:34:58 -- common/autotest_common.sh@1648 -- # local device=nvme1n3 00:16:28.195 15:34:58 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:16:28.195 15:34:58 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:28.195 15:34:58 -- setup/acl.sh@12 -- # devs=() 00:16:28.195 15:34:58 -- setup/acl.sh@12 -- # declare -a devs 00:16:28.195 15:34:58 -- setup/acl.sh@13 -- # drivers=() 00:16:28.195 15:34:58 -- setup/acl.sh@13 -- # declare -A drivers 00:16:28.195 15:34:58 -- setup/acl.sh@51 -- # setup reset 00:16:28.195 15:34:58 -- 
setup/common.sh@9 -- # [[ reset == output ]] 00:16:28.195 15:34:58 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:29.175 15:34:59 -- setup/acl.sh@52 -- # collect_setup_devs 00:16:29.175 15:34:59 -- setup/acl.sh@16 -- # local dev driver 00:16:29.175 15:34:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:29.175 15:34:59 -- setup/acl.sh@15 -- # setup output status 00:16:29.175 15:34:59 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:29.175 15:34:59 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # [[ (1af4 == *:*:*.* ]] 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # continue 00:16:29.744 15:34:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:29.744 Hugepages 00:16:29.744 node hugesize free / total 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # continue 00:16:29.744 15:34:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:29.744 00:16:29.744 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # continue 00:16:29.744 15:34:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:29.744 15:34:59 -- setup/acl.sh@19 -- # [[ 0000:00:03.0 == *:*:*.* ]] 00:16:29.744 15:34:59 -- setup/acl.sh@20 -- # [[ virtio-pci == nvme ]] 00:16:29.744 15:34:59 -- setup/acl.sh@20 -- # continue 00:16:29.744 15:34:59 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:29.744 15:35:00 -- setup/acl.sh@19 -- # [[ 0000:00:10.0 == *:*:*.* ]] 00:16:29.744 15:35:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:16:29.744 15:35:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:16:29.744 15:35:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:16:29.744 15:35:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:16:29.744 15:35:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:30.002 15:35:00 -- setup/acl.sh@19 -- # [[ 0000:00:11.0 == *:*:*.* ]] 00:16:30.002 15:35:00 -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:16:30.002 15:35:00 -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:16:30.002 15:35:00 -- setup/acl.sh@22 -- # devs+=("$dev") 00:16:30.002 15:35:00 -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:16:30.002 15:35:00 -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:16:30.002 15:35:00 -- setup/acl.sh@24 -- # (( 2 > 0 )) 00:16:30.002 15:35:00 -- setup/acl.sh@54 -- # run_test denied denied 00:16:30.002 15:35:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:30.002 15:35:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:30.002 15:35:00 -- common/autotest_common.sh@10 -- # set +x 00:16:30.002 ************************************ 00:16:30.002 START TEST denied 00:16:30.002 ************************************ 00:16:30.002 15:35:00 -- common/autotest_common.sh@1111 -- # denied 00:16:30.002 15:35:00 -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:00:10.0' 00:16:30.002 15:35:00 -- setup/acl.sh@38 -- # setup output config 00:16:30.002 15:35:00 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:30.002 15:35:00 -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:00:10.0' 00:16:30.002 15:35:00 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:30.936 0000:00:10.0 (1b36 0010): Skipping denied controller at 0000:00:10.0 00:16:30.936 15:35:01 -- setup/acl.sh@40 -- # 
verify 0000:00:10.0 00:16:30.936 15:35:01 -- setup/acl.sh@28 -- # local dev driver 00:16:30.937 15:35:01 -- setup/acl.sh@30 -- # for dev in "$@" 00:16:30.937 15:35:01 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:10.0 ]] 00:16:30.937 15:35:01 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:10.0/driver 00:16:30.937 15:35:01 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:16:30.937 15:35:01 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:16:30.937 15:35:01 -- setup/acl.sh@41 -- # setup reset 00:16:30.937 15:35:01 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:30.937 15:35:01 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:31.504 ************************************ 00:16:31.504 END TEST denied 00:16:31.504 ************************************ 00:16:31.504 00:16:31.504 real 0m1.503s 00:16:31.504 user 0m0.589s 00:16:31.504 sys 0m0.852s 00:16:31.504 15:35:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:31.504 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.504 15:35:01 -- setup/acl.sh@55 -- # run_test allowed allowed 00:16:31.504 15:35:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:31.504 15:35:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:31.504 15:35:01 -- common/autotest_common.sh@10 -- # set +x 00:16:31.762 ************************************ 00:16:31.762 START TEST allowed 00:16:31.762 ************************************ 00:16:31.762 15:35:01 -- common/autotest_common.sh@1111 -- # allowed 00:16:31.762 15:35:01 -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:00:10.0 00:16:31.762 15:35:01 -- setup/acl.sh@45 -- # setup output config 00:16:31.762 15:35:01 -- setup/acl.sh@46 -- # grep -E '0000:00:10.0 .*: nvme -> .*' 00:16:31.762 15:35:01 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:31.762 15:35:01 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:32.699 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:32.699 15:35:02 -- setup/acl.sh@47 -- # verify 0000:00:11.0 00:16:32.699 15:35:02 -- setup/acl.sh@28 -- # local dev driver 00:16:32.699 15:35:02 -- setup/acl.sh@30 -- # for dev in "$@" 00:16:32.699 15:35:02 -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:00:11.0 ]] 00:16:32.699 15:35:02 -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:00:11.0/driver 00:16:32.699 15:35:02 -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:16:32.699 15:35:02 -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:16:32.699 15:35:02 -- setup/acl.sh@48 -- # setup reset 00:16:32.699 15:35:02 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:32.699 15:35:02 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:33.265 00:16:33.265 real 0m1.581s 00:16:33.265 user 0m0.662s 00:16:33.265 sys 0m0.901s 00:16:33.265 15:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.265 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.265 ************************************ 00:16:33.265 END TEST allowed 00:16:33.265 ************************************ 00:16:33.265 ************************************ 00:16:33.265 END TEST acl 00:16:33.265 ************************************ 00:16:33.265 00:16:33.265 real 0m5.090s 00:16:33.265 user 0m2.181s 00:16:33.265 sys 0m2.806s 00:16:33.265 15:35:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:33.265 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.265 15:35:03 -- setup/test-setup.sh@13 -- 
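The denied/allowed pair above drives setup.sh purely through the PCI_BLOCKED and PCI_ALLOWED environment variables: with 0000:00:10.0 blocked the script prints "Skipping denied controller", and with only 0000:00:10.0 allowed the other controller is left untouched. An illustrative helper in the same spirit (a hypothetical sketch, not setup.sh's own implementation) that decides whether a given BDF may be rebound:

pci_can_use() {                           # hypothetical helper for illustration
    local bdf=$1
    # an explicit block always wins
    [[ " $PCI_BLOCKED " == *" $bdf "* ]] && return 1
    # an empty allow list means "everything not blocked is allowed"
    [[ -z $PCI_ALLOWED ]] && return 0
    [[ " $PCI_ALLOWED " == *" $bdf "* ]]
}

PCI_BLOCKED=" 0000:00:10.0"
pci_can_use 0000:00:10.0 || echo "Skipping denied controller at 0000:00:10.0"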
# run_test hugepages /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:16:33.265 15:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:33.265 15:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.265 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.523 ************************************ 00:16:33.523 START TEST hugepages 00:16:33.523 ************************************ 00:16:33.523 15:35:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/hugepages.sh 00:16:33.523 * Looking for test storage... 00:16:33.523 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:33.523 15:35:03 -- setup/hugepages.sh@10 -- # nodes_sys=() 00:16:33.523 15:35:03 -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:16:33.523 15:35:03 -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:16:33.523 15:35:03 -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:16:33.523 15:35:03 -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:16:33.523 15:35:03 -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:16:33.523 15:35:03 -- setup/common.sh@17 -- # local get=Hugepagesize 00:16:33.523 15:35:03 -- setup/common.sh@18 -- # local node= 00:16:33.524 15:35:03 -- setup/common.sh@19 -- # local var val 00:16:33.524 15:35:03 -- setup/common.sh@20 -- # local mem_f mem 00:16:33.524 15:35:03 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:33.524 15:35:03 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:33.524 15:35:03 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:33.524 15:35:03 -- setup/common.sh@28 -- # mapfile -t mem 00:16:33.524 15:35:03 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 5460600 kB' 'MemAvailable: 7404048 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 875756 kB' 'Inactive: 1386020 kB' 'Active(anon): 116572 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386020 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 684 kB' 'Writeback: 0 kB' 'AnonPages: 107744 kB' 'Mapped: 48760 kB' 'Shmem: 10488 kB' 'KReclaimable: 70368 kB' 'Slab: 145148 kB' 'SReclaimable: 70368 kB' 'SUnreclaim: 74780 kB' 'KernelStack: 6688 kB' 'PageTables: 4548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 12412440 kB' 'Committed_AS: 339412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 
00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # 
[[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- 
setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.524 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.524 15:35:03 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # continue 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # IFS=': ' 00:16:33.525 15:35:03 -- setup/common.sh@31 -- # read -r var val _ 00:16:33.525 15:35:03 -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:16:33.525 15:35:03 -- setup/common.sh@33 -- # echo 2048 
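The long [[ ... == \H\u\g\e\p\a\g\e\s\i\z\e ]] / continue run above is get_meminfo scanning the meminfo fields one by one until it reaches the requested key; the echo 2048 at the end is the Hugepagesize value being returned. A minimal sketch of that lookup, assuming /proc/meminfo only (the traced helper can also read a node's own meminfo file):

get_meminfo() {
    local get=$1 var val _
    # "Hugepagesize:    2048 kB" splits into var=Hugepagesize, val=2048, _=kB
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done < /proc/meminfo
    return 1
}

get_meminfo Hugepagesize        # prints 2048 on this machine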
00:16:33.525 15:35:03 -- setup/common.sh@33 -- # return 0 00:16:33.525 15:35:03 -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:16:33.525 15:35:03 -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:16:33.525 15:35:03 -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:16:33.525 15:35:03 -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:16:33.525 15:35:03 -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:16:33.525 15:35:03 -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:16:33.525 15:35:03 -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:16:33.525 15:35:03 -- setup/hugepages.sh@207 -- # get_nodes 00:16:33.525 15:35:03 -- setup/hugepages.sh@27 -- # local node 00:16:33.525 15:35:03 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:33.525 15:35:03 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:16:33.525 15:35:03 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:33.525 15:35:03 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:33.525 15:35:03 -- setup/hugepages.sh@208 -- # clear_hp 00:16:33.525 15:35:03 -- setup/hugepages.sh@37 -- # local node hp 00:16:33.525 15:35:03 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:33.525 15:35:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:33.525 15:35:03 -- setup/hugepages.sh@41 -- # echo 0 00:16:33.525 15:35:03 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:33.525 15:35:03 -- setup/hugepages.sh@41 -- # echo 0 00:16:33.525 15:35:03 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:16:33.525 15:35:03 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:16:33.525 15:35:03 -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:16:33.525 15:35:03 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:33.525 15:35:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:33.525 15:35:03 -- common/autotest_common.sh@10 -- # set +x 00:16:33.525 ************************************ 00:16:33.525 START TEST default_setup 00:16:33.525 ************************************ 00:16:33.525 15:35:03 -- common/autotest_common.sh@1111 -- # default_setup 00:16:33.525 15:35:03 -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:16:33.525 15:35:03 -- setup/hugepages.sh@49 -- # local size=2097152 00:16:33.525 15:35:03 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:33.525 15:35:03 -- setup/hugepages.sh@51 -- # shift 00:16:33.525 15:35:03 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:33.525 15:35:03 -- setup/hugepages.sh@52 -- # local node_ids 00:16:33.525 15:35:03 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:33.525 15:35:03 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:33.525 15:35:03 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:33.525 15:35:03 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:33.525 15:35:03 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:33.525 15:35:03 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:33.525 15:35:03 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:33.525 15:35:03 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:33.525 15:35:03 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:33.525 15:35:03 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:33.525 15:35:03 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:33.525 
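The Hugepagesize probe that just finished above is setup/common.sh's get_meminfo helper walking /proc/meminfo one field at a time with IFS=': ' and read -r var val _, skipping every key that is not the requested one (the long run of "continue" steps) and echoing the value of the first match, 2048 on this runner. A minimal stand-alone sketch of that loop; the name get_meminfo_sketch and the direct read from /proc/meminfo are illustrative only, since the real helper first snapshots the file with mapfile as the trace shows:

    get_meminfo_sketch() {
        # print the value of one /proc/meminfo field, e.g. "Hugepagesize" -> 2048
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the repeated "continue" steps in the trace
            echo "$val"                        # the "kB" unit lands in $_ and is dropped
            return 0
        done < /proc/meminfo
        return 1
    }

    get_meminfo_sketch Hugepagesize   # -> 2048 on this runner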
15:35:03 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:16:33.525 15:35:03 -- setup/hugepages.sh@73 -- # return 0 00:16:33.525 15:35:03 -- setup/hugepages.sh@137 -- # setup output 00:16:33.525 15:35:03 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:33.525 15:35:03 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:34.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:34.462 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:34.462 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:34.462 15:35:04 -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:16:34.462 15:35:04 -- setup/hugepages.sh@89 -- # local node 00:16:34.462 15:35:04 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:34.462 15:35:04 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:34.462 15:35:04 -- setup/hugepages.sh@92 -- # local surp 00:16:34.462 15:35:04 -- setup/hugepages.sh@93 -- # local resv 00:16:34.462 15:35:04 -- setup/hugepages.sh@94 -- # local anon 00:16:34.462 15:35:04 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:34.462 15:35:04 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:34.462 15:35:04 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:34.462 15:35:04 -- setup/common.sh@18 -- # local node= 00:16:34.462 15:35:04 -- setup/common.sh@19 -- # local var val 00:16:34.462 15:35:04 -- setup/common.sh@20 -- # local mem_f mem 00:16:34.462 15:35:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:34.462 15:35:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:34.462 15:35:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:34.462 15:35:04 -- setup/common.sh@28 -- # mapfile -t mem 00:16:34.462 15:35:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:34.462 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.462 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.462 15:35:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7543276 kB' 'MemAvailable: 9486544 kB' 'Buffers: 2436 kB' 'Cached: 2153252 kB' 'SwapCached: 0 kB' 'Active: 892644 kB' 'Inactive: 1386028 kB' 'Active(anon): 133460 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 860 kB' 'Writeback: 0 kB' 'AnonPages: 124324 kB' 'Mapped: 48876 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144680 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74688 kB' 'KernelStack: 6672 kB' 'PageTables: 4620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:34.462 15:35:04 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.462 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 
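With the 2048 kB page size known, default_setup requests its pool via get_test_nr_hugepages 2097152 0: 2097152 kB of huge pages bound to node id 0, which the trace shows ending up as nr_hugepages=1024 and, via the user_nodes loop, nodes_test[0]=1024. Those numbers are consistent with a plain size/page-size division; the snippet below is a hedged condensation of that arithmetic (the exact expression inside hugepages.sh is not visible in the expanded trace), not the script itself:

    size_kb=2097152              # first argument to get_test_nr_hugepages
    default_hugepages=2048       # Hugepagesize read from /proc/meminfo above
    nr_hugepages=$(( size_kb / default_hugepages ))   # 2097152 / 2048 = 1024
    nodes_test=()                # per-node allocation table
    nodes_test[0]=$nr_hugepages  # only node id "0" was passed, so it gets all 1024 pages

After that the test re-runs scripts/setup.sh ("setup output"): the log records the two QEMU NVMe controllers at 0000:00:10.0 and 0000:00:11.0 being rebound from nvme to uio_pci_generic, while 0000:00:03.0 is skipped because its vda partitions are mounted.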
-- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 
00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 
15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.463 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.463 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- 
setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:34.464 15:35:04 -- setup/common.sh@33 -- # echo 0 00:16:34.464 15:35:04 -- setup/common.sh@33 -- # return 0 00:16:34.464 15:35:04 -- setup/hugepages.sh@97 -- # anon=0 00:16:34.464 15:35:04 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:34.464 15:35:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:34.464 15:35:04 -- setup/common.sh@18 -- # local node= 00:16:34.464 15:35:04 -- setup/common.sh@19 -- # local var val 00:16:34.464 15:35:04 -- setup/common.sh@20 -- # local mem_f mem 00:16:34.464 15:35:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:34.464 15:35:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:34.464 15:35:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:34.464 15:35:04 -- setup/common.sh@28 -- # mapfile -t mem 00:16:34.464 15:35:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7543028 kB' 'MemAvailable: 9486296 kB' 'Buffers: 2436 kB' 'Cached: 2153252 kB' 'SwapCached: 0 kB' 'Active: 892116 kB' 'Inactive: 1386028 kB' 'Active(anon): 132932 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 860 kB' 'Writeback: 0 kB' 'AnonPages: 124084 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144676 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74684 kB' 'KernelStack: 6624 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- 
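The get_meminfo call that has just started for HugePages_Surp shows the other half of the helper: source selection. With no node argument, "local node=" leaves $node empty, the test for /sys/devices/system/node/node/meminfo fails, "[[ -n '' ]]" is false, and the helper stays on the global /proc/meminfo it snapshotted with mapfile; with a node id it would read the per-node file and strip the leading "Node <N> " prefix from every line (the "${mem[@]#Node +([0-9]) }" expansion). A hypothetical condensation of that choice, with illustrative function and variable names:

    pick_meminfo_source() {
        local node=$1 mem_f=/proc/meminfo
        local node_f=/sys/devices/system/node/node$node/meminfo
        # with an empty $node the path collapses to .../node/node/meminfo,
        # the -e test fails, and the global file is kept -- as in the trace
        [[ -n $node && -e $node_f ]] && mem_f=$node_f
        echo "$mem_f"
    }

    pick_meminfo_source      # -> /proc/meminfo (what this run uses)
    pick_meminfo_source 0    # -> the node0 file, when it exists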
setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.464 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.464 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 
00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- 
setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 
00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.465 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.465 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.466 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.466 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.466 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.466 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.466 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.466 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.466 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.466 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.466 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.466 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.466 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.466 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.466 15:35:04 -- setup/common.sh@33 -- # echo 0 00:16:34.466 15:35:04 -- setup/common.sh@33 -- # return 0 00:16:34.726 15:35:04 -- setup/hugepages.sh@99 -- # surp=0 00:16:34.726 15:35:04 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:34.726 15:35:04 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:34.726 15:35:04 -- setup/common.sh@18 -- # local node= 00:16:34.726 15:35:04 -- setup/common.sh@19 -- # local var val 00:16:34.726 15:35:04 -- setup/common.sh@20 -- # local mem_f mem 00:16:34.726 15:35:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:34.726 15:35:04 -- 
setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:34.726 15:35:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:34.726 15:35:04 -- setup/common.sh@28 -- # mapfile -t mem 00:16:34.726 15:35:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:34.726 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7543028 kB' 'MemAvailable: 9486296 kB' 'Buffers: 2436 kB' 'Cached: 2153252 kB' 'SwapCached: 0 kB' 'Active: 892072 kB' 'Inactive: 1386028 kB' 'Active(anon): 132888 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 860 kB' 'Writeback: 0 kB' 'AnonPages: 123996 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144668 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74676 kB' 'KernelStack: 6608 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 
00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Writeback == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 
-- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.727 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.727 15:35:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- 
setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:34.728 15:35:04 -- setup/common.sh@33 -- # echo 0 00:16:34.728 15:35:04 -- setup/common.sh@33 -- # return 0 00:16:34.728 15:35:04 -- setup/hugepages.sh@100 -- # resv=0 00:16:34.728 nr_hugepages=1024 00:16:34.728 15:35:04 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:34.728 resv_hugepages=0 00:16:34.728 surplus_hugepages=0 00:16:34.728 anon_hugepages=0 00:16:34.728 15:35:04 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:34.728 15:35:04 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:34.728 15:35:04 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:34.728 15:35:04 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:34.728 15:35:04 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:34.728 15:35:04 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:34.728 15:35:04 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:34.728 15:35:04 -- setup/common.sh@18 -- # local node= 00:16:34.728 15:35:04 -- setup/common.sh@19 -- # local var val 00:16:34.728 15:35:04 -- setup/common.sh@20 -- # local mem_f mem 00:16:34.728 15:35:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:34.728 15:35:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:34.728 15:35:04 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:34.728 15:35:04 -- setup/common.sh@28 -- # mapfile -t mem 00:16:34.728 15:35:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7543028 kB' 'MemAvailable: 9486296 kB' 'Buffers: 2436 kB' 'Cached: 2153252 kB' 'SwapCached: 0 kB' 'Active: 892128 kB' 'Inactive: 1386028 kB' 'Active(anon): 132944 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 
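At this point verify_nr_hugepages has everything it needs: anon=0 (AnonHugePages), surp=0 (HugePages_Surp), resv=0 (HugePages_Rsvd). It prints the summary nr_hugepages=1024 resv_hugepages=0 surplus_hugepages=0 anon_hugepages=0, evaluates "(( 1024 == nr_hugepages + surp + resv ))" and "(( 1024 == nr_hugepages ))", and then re-reads HugePages_Total (the scan that continues below). Roughly, and only as a sketch of the relation those two arithmetic checks encode (the literal 1024 in the trace is already expanded, so the variable names here are an assumption):

    nr_hugepages=1024            # pages the test configured earlier
    anon=0 surp=0 resv=0         # AnonHugePages, HugePages_Surp, HugePages_Rsvd from get_meminfo
    hp_total=1024                # HugePages_Total, which the next get_meminfo call reads

    (( hp_total == nr_hugepages + surp + resv )) || echo "surplus/reserved pages unaccounted for"
    (( hp_total == nr_hugepages ))               || echo "kernel pool differs from requested size"

Both relations hold here (1024 == 1024 + 0 + 0), so the default_setup test can proceed with the pool it asked for.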
1386028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 860 kB' 'Writeback: 0 kB' 'AnonPages: 124080 kB' 'Mapped: 48808 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144668 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74676 kB' 'KernelStack: 6624 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356060 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.728 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.728 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- 
setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 
00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- 
setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:34.729 15:35:04 -- setup/common.sh@33 -- # echo 1024 00:16:34.729 15:35:04 -- setup/common.sh@33 -- # return 0 00:16:34.729 15:35:04 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:34.729 15:35:04 -- setup/hugepages.sh@112 -- # get_nodes 00:16:34.729 15:35:04 -- setup/hugepages.sh@27 -- # local node 00:16:34.729 15:35:04 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:34.729 15:35:04 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:34.729 15:35:04 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:34.729 15:35:04 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:34.729 15:35:04 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:34.729 15:35:04 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:34.729 15:35:04 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:34.729 15:35:04 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:34.729 15:35:04 -- setup/common.sh@18 -- # local node=0 00:16:34.729 15:35:04 -- setup/common.sh@19 -- # local var val 00:16:34.729 15:35:04 -- setup/common.sh@20 -- # local mem_f mem 00:16:34.729 15:35:04 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:34.729 15:35:04 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:34.729 15:35:04 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:34.729 15:35:04 -- setup/common.sh@28 -- # mapfile -t mem 00:16:34.729 15:35:04 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.729 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.729 15:35:04 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7543028 kB' 'MemUsed: 4698952 kB' 'SwapCached: 0 kB' 'Active: 892084 kB' 'Inactive: 1386028 kB' 'Active(anon): 132900 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386028 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 860 kB' 'Writeback: 0 kB' 'FilePages: 2155688 kB' 'Mapped: 48808 kB' 'AnonPages: 124008 kB' 'Shmem: 10464 kB' 'KernelStack: 6608 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69992 kB' 'Slab: 144660 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74668 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 
00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ FilePages 
== \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # continue 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # IFS=': ' 00:16:34.730 15:35:04 -- setup/common.sh@31 -- # read -r var val _ 00:16:34.730 15:35:04 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:34.730 15:35:04 -- setup/common.sh@33 -- # echo 0 00:16:34.730 15:35:04 -- setup/common.sh@33 -- # return 0 00:16:34.730 node0=1024 expecting 1024 00:16:34.730 15:35:04 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:34.730 15:35:04 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:34.730 15:35:04 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:34.730 15:35:04 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:34.730 15:35:04 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:34.730 15:35:04 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:34.730 00:16:34.730 real 0m1.053s 00:16:34.730 user 0m0.462s 00:16:34.730 sys 0m0.501s 00:16:34.730 ************************************ 00:16:34.730 END TEST default_setup 00:16:34.730 
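The trace above is setup/common.sh's get_meminfo walking every "key: value" pair of /proc/meminfo (and then of /sys/devices/system/node/node0/meminfo) until it reaches the requested field, echoing the value and returning; default_setup then confirms the 1024 reserved hugepages are all accounted to node 0 ("node0=1024 expecting 1024"). A minimal standalone sketch of that lookup, assuming only the standard kernel meminfo files; the function name get_meminfo_field and its arguments are illustrative, not the SPDK helper itself:

get_meminfo_field() {
    # Usage: get_meminfo_field <field> [node]
    #   get_meminfo_field HugePages_Total    -> system-wide count (1024 above)
    #   get_meminfo_field HugePages_Surp 0   -> node 0 only (0 above)
    local get=$1 node=$2 mem_f=/proc/meminfo var val _
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node meminfo lines carry a "Node N " prefix; strip it so the same
    # "key: value" scan works for both files, mirroring the loop in the trace.
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < <(sed 's/^Node [0-9]* //' "$mem_f")
    return 1
}

The per-node counters also let the test cross-check totals: in the node 0 dump above, MemUsed (4698952 kB) equals MemTotal minus MemFree (12241980 - 7543028), and HugePages_Total and HugePages_Free are both 1024, matching the expected reservation.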
************************************ 00:16:34.730 15:35:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:34.730 15:35:04 -- common/autotest_common.sh@10 -- # set +x 00:16:34.730 15:35:04 -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:16:34.731 15:35:04 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:34.731 15:35:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:34.731 15:35:04 -- common/autotest_common.sh@10 -- # set +x 00:16:34.731 ************************************ 00:16:34.731 START TEST per_node_1G_alloc 00:16:34.731 ************************************ 00:16:34.731 15:35:04 -- common/autotest_common.sh@1111 -- # per_node_1G_alloc 00:16:34.731 15:35:04 -- setup/hugepages.sh@143 -- # local IFS=, 00:16:34.731 15:35:04 -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 00:16:34.731 15:35:04 -- setup/hugepages.sh@49 -- # local size=1048576 00:16:34.731 15:35:04 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:34.731 15:35:04 -- setup/hugepages.sh@51 -- # shift 00:16:34.731 15:35:04 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:34.731 15:35:04 -- setup/hugepages.sh@52 -- # local node_ids 00:16:34.731 15:35:04 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:34.731 15:35:04 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:16:34.731 15:35:04 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:34.731 15:35:04 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:34.731 15:35:04 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:34.731 15:35:04 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:34.731 15:35:04 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:34.731 15:35:04 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:34.731 15:35:04 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:34.731 15:35:04 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:34.731 15:35:04 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:34.731 15:35:04 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:16:34.731 15:35:04 -- setup/hugepages.sh@73 -- # return 0 00:16:34.731 15:35:04 -- setup/hugepages.sh@146 -- # NRHUGE=512 00:16:34.731 15:35:04 -- setup/hugepages.sh@146 -- # HUGENODE=0 00:16:34.731 15:35:04 -- setup/hugepages.sh@146 -- # setup output 00:16:34.731 15:35:04 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:34.731 15:35:04 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.302 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:35.302 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:35.302 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:35.302 15:35:05 -- setup/hugepages.sh@147 -- # nr_hugepages=512 00:16:35.302 15:35:05 -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:16:35.302 15:35:05 -- setup/hugepages.sh@89 -- # local node 00:16:35.302 15:35:05 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:35.302 15:35:05 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:35.302 15:35:05 -- setup/hugepages.sh@92 -- # local surp 00:16:35.302 15:35:05 -- setup/hugepages.sh@93 -- # local resv 00:16:35.302 15:35:05 -- setup/hugepages.sh@94 -- # local anon 00:16:35.302 15:35:05 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:35.302 15:35:05 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:35.302 15:35:05 -- setup/common.sh@17 -- # local 
get=AnonHugePages 00:16:35.302 15:35:05 -- setup/common.sh@18 -- # local node= 00:16:35.302 15:35:05 -- setup/common.sh@19 -- # local var val 00:16:35.302 15:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.302 15:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.302 15:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.302 15:35:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.302 15:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.302 15:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8605636 kB' 'MemAvailable: 10548920 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 892104 kB' 'Inactive: 1386044 kB' 'Active(anon): 132920 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1036 kB' 'Writeback: 0 kB' 'AnonPages: 124048 kB' 'Mapped: 48996 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144656 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74664 kB' 'KernelStack: 6596 kB' 'PageTables: 4448 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # 
continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Dirty == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.302 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.302 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 
00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.303 15:35:05 -- setup/common.sh@33 -- # echo 0 00:16:35.303 15:35:05 -- setup/common.sh@33 -- # return 0 00:16:35.303 15:35:05 -- setup/hugepages.sh@97 -- # anon=0 00:16:35.303 15:35:05 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:35.303 15:35:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:35.303 15:35:05 -- setup/common.sh@18 -- # local node= 00:16:35.303 15:35:05 -- setup/common.sh@19 -- # local var val 00:16:35.303 15:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.303 15:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.303 15:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.303 15:35:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.303 15:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.303 15:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8605224 kB' 'MemAvailable: 10548508 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 891880 kB' 'Inactive: 1386044 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1036 kB' 'Writeback: 0 kB' 'AnonPages: 124088 kB' 'Mapped: 48844 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144648 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6608 kB' 'PageTables: 4408 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': 
' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 
-- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.303 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.303 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- 
setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.304 15:35:05 -- setup/common.sh@33 -- # echo 0 00:16:35.304 15:35:05 -- setup/common.sh@33 -- # return 0 00:16:35.304 15:35:05 -- setup/hugepages.sh@99 -- # surp=0 00:16:35.304 15:35:05 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:35.304 15:35:05 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:35.304 15:35:05 -- setup/common.sh@18 -- # local node= 00:16:35.304 15:35:05 -- setup/common.sh@19 -- # local var val 00:16:35.304 15:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.304 15:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.304 15:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.304 15:35:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.304 15:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.304 15:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8607416 kB' 'MemAvailable: 10550700 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 891880 kB' 'Inactive: 1386044 kB' 'Active(anon): 132696 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1036 kB' 'Writeback: 0 kB' 'AnonPages: 123800 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144640 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74648 kB' 'KernelStack: 6608 kB' 'PageTables: 4404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 355752 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.304 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.304 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var 
val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 
15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # 
[[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.305 15:35:05 -- setup/common.sh@33 -- # echo 0 00:16:35.305 15:35:05 -- setup/common.sh@33 -- # return 0 00:16:35.305 nr_hugepages=512 00:16:35.305 resv_hugepages=0 00:16:35.305 surplus_hugepages=0 00:16:35.305 anon_hugepages=0 00:16:35.305 15:35:05 -- setup/hugepages.sh@100 -- # resv=0 00:16:35.305 15:35:05 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:16:35.305 15:35:05 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:35.305 15:35:05 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:35.305 15:35:05 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:35.305 15:35:05 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:35.305 15:35:05 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:16:35.305 15:35:05 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:35.305 15:35:05 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:35.305 15:35:05 -- setup/common.sh@18 -- # local node= 00:16:35.305 15:35:05 -- setup/common.sh@19 -- # local var val 00:16:35.305 15:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.305 15:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.305 15:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.305 15:35:05 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.305 15:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.305 15:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8605920 kB' 'MemAvailable: 10549204 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 892148 kB' 'Inactive: 1386044 kB' 'Active(anon): 132964 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1036 kB' 'Writeback: 0 kB' 'AnonPages: 123860 kB' 'Mapped: 48948 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144632 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74640 kB' 'KernelStack: 6624 kB' 'PageTables: 4460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 358308 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 
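The lookups traced above and below all follow the same pattern from setup/common.sh: get_meminfo reads either /proc/meminfo or a per-node meminfo file into an array, strips any leading "Node N" prefix, then walks the "key: value" pairs until the requested key matches and echoes its value (the "echo 0" / "return 0" pairs in the trace). A minimal standalone sketch of that lookup, reconstructed from the trace rather than copied from the repository, looks like this:

shopt -s extglob                       # needed for the +([0-9]) prefix strip below
get_meminfo() {                        # sketch only; $1 = field name, $2 = optional NUMA node
    local get=$1 node=${2:-} var val _
    local mem_f=/proc/meminfo
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node files prefix every line with "Node N "
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"                    # e.g. 0 for HugePages_Rsvd, 512 for HugePages_Total
        return 0
    done < <(printf '%s\n' "${mem[@]}")
}

Against the snapshot printed above, get_meminfo HugePages_Total would print 512, which is exactly the "echo 512" the trace reaches a few lines further down.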
00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.305 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.305 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 
00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ 
SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.306 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.306 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 
-- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:35.307 15:35:05 -- setup/common.sh@33 -- # echo 512 00:16:35.307 15:35:05 -- setup/common.sh@33 -- # return 0 00:16:35.307 15:35:05 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:35.307 15:35:05 -- setup/hugepages.sh@112 -- # get_nodes 00:16:35.307 15:35:05 -- setup/hugepages.sh@27 -- # local node 00:16:35.307 15:35:05 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:35.307 15:35:05 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:16:35.307 15:35:05 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:35.307 15:35:05 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:35.307 15:35:05 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:35.307 15:35:05 -- setup/hugepages.sh@116 -- # (( 
nodes_test[node] += resv )) 00:16:35.307 15:35:05 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:35.307 15:35:05 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:35.307 15:35:05 -- setup/common.sh@18 -- # local node=0 00:16:35.307 15:35:05 -- setup/common.sh@19 -- # local var val 00:16:35.307 15:35:05 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.307 15:35:05 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.307 15:35:05 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:35.307 15:35:05 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:35.307 15:35:05 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.307 15:35:05 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8606260 kB' 'MemUsed: 3635720 kB' 'SwapCached: 0 kB' 'Active: 891868 kB' 'Inactive: 1386044 kB' 'Active(anon): 132684 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1036 kB' 'Writeback: 0 kB' 'FilePages: 2155692 kB' 'Mapped: 48828 kB' 'AnonPages: 124080 kB' 'Shmem: 10464 kB' 'KernelStack: 6560 kB' 'PageTables: 4268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69992 kB' 'Slab: 144632 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74640 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 
00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.307 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.307 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 
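The pass above repeats the same lookup against /sys/devices/system/node/node0/meminfo (hence the MemUsed/FilePages fields and the "Node 0" prefix handling), since per_node_1G_alloc pins the whole reservation to one node. The expectation the trace reaches next, node0=512 expecting 512, is simply 512 pages of 2048 kB making up the test's 1 GiB: 512 * 2048 kB = 1048576 kB, matching the Hugetlb figure in the snapshots. An illustrative way to confirm the same numbers outside the test, using the field names and values reported in the trace:

grep -E 'HugePages_(Total|Free|Surp)' /sys/devices/system/node/node0/meminfo
# Node 0 HugePages_Total:   512
# Node 0 HugePages_Free:    512
# Node 0 HugePages_Surp:      0
echo $(( 512 * 2048 ))                 # 1048576 kB, the "1G per node" the test name refers to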
00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # continue 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.308 15:35:05 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.308 15:35:05 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.308 15:35:05 -- setup/common.sh@33 -- # echo 0 00:16:35.308 15:35:05 -- setup/common.sh@33 -- # return 0 00:16:35.308 15:35:05 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:35.308 15:35:05 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:35.308 node0=512 expecting 512 00:16:35.308 15:35:05 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:35.308 15:35:05 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:35.308 15:35:05 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:16:35.308 15:35:05 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:16:35.308 00:16:35.308 real 0m0.559s 00:16:35.308 user 0m0.282s 00:16:35.308 sys 0m0.288s 00:16:35.308 ************************************ 00:16:35.308 END TEST per_node_1G_alloc 00:16:35.308 ************************************ 00:16:35.308 15:35:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:35.308 15:35:05 -- common/autotest_common.sh@10 -- # set +x 00:16:35.308 15:35:05 -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:16:35.308 15:35:05 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:35.308 15:35:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:35.308 15:35:05 -- common/autotest_common.sh@10 -- # set +x 00:16:35.566 ************************************ 00:16:35.566 START TEST even_2G_alloc 00:16:35.566 ************************************ 00:16:35.566 15:35:05 -- common/autotest_common.sh@1111 -- # even_2G_alloc 00:16:35.566 15:35:05 -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:16:35.566 15:35:05 -- setup/hugepages.sh@49 -- # local size=2097152 00:16:35.566 15:35:05 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:35.566 15:35:05 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:35.566 15:35:05 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:35.566 15:35:05 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:35.566 15:35:05 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:35.566 15:35:05 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:35.566 15:35:05 -- setup/hugepages.sh@64 -- # local 
_nr_hugepages=1024 00:16:35.566 15:35:05 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:35.566 15:35:05 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:35.566 15:35:05 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:35.566 15:35:05 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:35.566 15:35:05 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:35.566 15:35:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:35.566 15:35:05 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1024 00:16:35.566 15:35:05 -- setup/hugepages.sh@83 -- # : 0 00:16:35.566 15:35:05 -- setup/hugepages.sh@84 -- # : 0 00:16:35.566 15:35:05 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:35.566 15:35:05 -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:16:35.566 15:35:05 -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:16:35.566 15:35:05 -- setup/hugepages.sh@153 -- # setup output 00:16:35.566 15:35:05 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:35.566 15:35:05 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:35.827 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:35.827 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:35.827 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:35.827 15:35:06 -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:16:35.827 15:35:06 -- setup/hugepages.sh@89 -- # local node 00:16:35.827 15:35:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:35.827 15:35:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:35.827 15:35:06 -- setup/hugepages.sh@92 -- # local surp 00:16:35.827 15:35:06 -- setup/hugepages.sh@93 -- # local resv 00:16:35.827 15:35:06 -- setup/hugepages.sh@94 -- # local anon 00:16:35.827 15:35:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:35.827 15:35:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:35.827 15:35:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:35.827 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:35.827 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:35.827 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.827 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.827 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.827 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.827 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.827 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555040 kB' 'MemAvailable: 9498324 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 892552 kB' 'Inactive: 1386044 kB' 'Active(anon): 133368 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1188 kB' 'Writeback: 0 kB' 'AnonPages: 124500 kB' 'Mapped: 48828 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144648 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6592 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 
'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.827 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.827 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:35.828 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:35.828 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:35.828 15:35:06 -- setup/hugepages.sh@97 -- # anon=0 00:16:35.828 15:35:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:35.828 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:35.828 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:35.828 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:35.828 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.828 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.828 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.828 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.828 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.828 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555040 kB' 'MemAvailable: 9498324 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 892076 kB' 'Inactive: 1386044 kB' 'Active(anon): 132892 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1192 kB' 'Writeback: 0 kB' 'AnonPages: 124248 kB' 'Mapped: 48772 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144648 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6656 kB' 'PageTables: 4560 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.828 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.828 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 
00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 
00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.829 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.829 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:35.830 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:35.830 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:35.830 15:35:06 -- setup/hugepages.sh@99 -- # surp=0 00:16:35.830 15:35:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:35.830 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:35.830 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:35.830 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:35.830 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:35.830 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:35.830 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:35.830 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:35.830 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:35.830 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555040 kB' 'MemAvailable: 9498324 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 891824 kB' 'Inactive: 1386044 kB' 'Active(anon): 132640 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1192 kB' 'Writeback: 0 kB' 'AnonPages: 124032 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144648 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74656 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # 
continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:35.830 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:35.830 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 
15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.091 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.091 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:36.091 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.091 nr_hugepages=1024 00:16:36.091 resv_hugepages=0 00:16:36.091 surplus_hugepages=0 00:16:36.091 anon_hugepages=0 00:16:36.091 15:35:06 -- setup/hugepages.sh@100 -- # resv=0 00:16:36.091 15:35:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:36.091 15:35:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:36.091 15:35:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:36.091 15:35:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:36.091 15:35:06 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:36.091 15:35:06 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:36.091 15:35:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:36.091 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:36.091 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:36.091 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.091 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.091 
15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.091 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:36.091 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:36.091 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.091 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.091 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555300 kB' 'MemAvailable: 9498584 kB' 'Buffers: 2436 kB' 'Cached: 2153256 kB' 'SwapCached: 0 kB' 'Active: 891856 kB' 'Inactive: 1386044 kB' 'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1192 kB' 'Writeback: 0 kB' 'AnonPages: 123816 kB' 'Mapped: 48840 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144652 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74660 kB' 'KernelStack: 6624 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 
15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 
15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.092 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.092 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.093 15:35:06 -- setup/common.sh@33 -- # echo 1024 00:16:36.093 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.093 15:35:06 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:36.093 15:35:06 -- setup/hugepages.sh@112 -- # get_nodes 00:16:36.093 15:35:06 -- setup/hugepages.sh@27 -- # local node 00:16:36.093 15:35:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:36.093 15:35:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:36.093 15:35:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:36.093 15:35:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:36.093 15:35:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:36.093 15:35:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:36.093 15:35:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:36.093 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:36.093 15:35:06 -- setup/common.sh@18 -- # local node=0 00:16:36.093 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.093 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.093 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.093 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:36.093 15:35:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:36.093 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.093 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555372 kB' 'MemUsed: 4686608 kB' 'SwapCached: 0 kB' 'Active: 891784 kB' 'Inactive: 1386044 kB' 'Active(anon): 132600 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386044 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1192 kB' 'Writeback: 0 kB' 'FilePages: 2155692 kB' 'Mapped: 48840 kB' 'AnonPages: 123984 kB' 'Shmem: 10464 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69992 kB' 'Slab: 144652 kB' 'SReclaimable: 69992 
kB' 'SUnreclaim: 74660 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 
15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.093 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.093 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.094 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.094 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.094 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:36.094 15:35:06 -- setup/common.sh@33 -- # return 0 
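Note on the trace above: the long run of [[ ... ]] / continue pairs is a single call to the setup scripts' get_meminfo helper. Under xtrace, every /proc/meminfo key that does not match the requested field emits one test plus one continue, so each lookup (AnonHugePages, HugePages_Surp, HugePages_Rsvd, HugePages_Total, then the per-node HugePages_Surp) produces roughly fifty trace lines. A condensed sketch of that pattern, reconstructed from the trace rather than taken verbatim from the SPDK sources, would be:

shopt -s extglob
# get_meminfo_sketch FIELD [NODE] -- hypothetical condensed form of the helper traced above
get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # with a node argument, read the per-node counters instead (as in the node0 lookup above)
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # per-node lines carry a "Node <n> " prefix
    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }   # e.g. "1024" for HugePages_Total
    done
    return 1
}

Each returned value feeds the checks that follow (anon=0, surp=0, resv=0, HugePages_Total=1024), which is why the even_2G_alloc test accepts the 1024 pre-allocated hugepages on node0.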
00:16:36.094 node0=1024 expecting 1024 00:16:36.094 15:35:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:36.094 15:35:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:36.094 15:35:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:36.094 15:35:06 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:36.094 15:35:06 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:36.094 00:16:36.094 real 0m0.560s 00:16:36.094 user 0m0.248s 00:16:36.094 sys 0m0.318s 00:16:36.094 15:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:36.094 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 ************************************ 00:16:36.094 END TEST even_2G_alloc 00:16:36.094 ************************************ 00:16:36.094 15:35:06 -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:16:36.094 15:35:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:36.094 15:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.094 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:16:36.094 ************************************ 00:16:36.094 START TEST odd_alloc 00:16:36.094 ************************************ 00:16:36.094 15:35:06 -- common/autotest_common.sh@1111 -- # odd_alloc 00:16:36.094 15:35:06 -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:16:36.094 15:35:06 -- setup/hugepages.sh@49 -- # local size=2098176 00:16:36.094 15:35:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:16:36.094 15:35:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:36.094 15:35:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:36.094 15:35:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:36.094 15:35:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:16:36.094 15:35:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:36.094 15:35:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:36.094 15:35:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:36.094 15:35:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=1025 00:16:36.094 15:35:06 -- setup/hugepages.sh@83 -- # : 0 00:16:36.094 15:35:06 -- setup/hugepages.sh@84 -- # : 0 00:16:36.094 15:35:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:36.094 15:35:06 -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:16:36.094 15:35:06 -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:16:36.094 15:35:06 -- setup/hugepages.sh@160 -- # setup output 00:16:36.094 15:35:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:36.094 15:35:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:36.665 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:36.665 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:36.665 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:36.665 15:35:06 -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:16:36.665 15:35:06 -- setup/hugepages.sh@89 -- # local node 00:16:36.665 
15:35:06 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:36.665 15:35:06 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:36.665 15:35:06 -- setup/hugepages.sh@92 -- # local surp 00:16:36.665 15:35:06 -- setup/hugepages.sh@93 -- # local resv 00:16:36.665 15:35:06 -- setup/hugepages.sh@94 -- # local anon 00:16:36.665 15:35:06 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:36.665 15:35:06 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:36.665 15:35:06 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:36.665 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:36.665 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.666 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.666 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.666 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:36.666 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:36.666 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.666 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7554008 kB' 'MemAvailable: 9497328 kB' 'Buffers: 2436 kB' 'Cached: 2153292 kB' 'SwapCached: 0 kB' 'Active: 892300 kB' 'Inactive: 1386080 kB' 'Active(anon): 133116 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1336 kB' 'Writeback: 0 kB' 'AnonPages: 124512 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144740 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74748 kB' 'KernelStack: 6612 kB' 'PageTables: 4524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 
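The get_meminfo snapshot printed just above (the long 'MemTotal: ...' printf) already reflects the odd_alloc request: the stage set HUGEMEM=2049 and get_test_nr_hugepages 2098176 settled on nr_hugepages=1025, and the snapshot's 'HugePages_Total: 1025', 'Hugepagesize: 2048 kB' and 'Hugetlb: 2099200 kB' entries agree with that. A quick arithmetic check, using only numbers taken from the log:

    # 2049 MiB of HUGEMEM expressed in kB, and the hugetlb pool that 1025
    # pages of 2048 kB occupy (matches the 'Hugetlb:' line in the snapshot).
    echo $(( 2049 * 1024 ))    # 2098176 -> the size passed to get_test_nr_hugepages
    echo $(( 1025 * 2048 ))    # 2099200 -> 'Hugetlb: 2099200 kB'

2098176 kB is 1024.5 two-megabyte pages, and the helper ends up at the odd page count 1025 that this test is named after.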
00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 
15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.666 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.666 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:36.667 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:36.667 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.667 15:35:06 -- setup/hugepages.sh@97 -- # anon=0 00:16:36.667 15:35:06 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:36.667 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:36.667 15:35:06 -- setup/common.sh@18 -- # local 
node= 00:16:36.667 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.667 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.667 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.667 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:36.667 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:36.667 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.667 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7554008 kB' 'MemAvailable: 9497328 kB' 'Buffers: 2436 kB' 'Cached: 2153292 kB' 'SwapCached: 0 kB' 'Active: 892016 kB' 'Inactive: 1386080 kB' 'Active(anon): 132832 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1336 kB' 'Writeback: 0 kB' 'AnonPages: 123976 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144740 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74748 kB' 'KernelStack: 6612 kB' 'PageTables: 4528 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': 
' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- 
setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.667 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.667 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.668 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:36.668 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.668 15:35:06 -- setup/hugepages.sh@99 -- # surp=0 00:16:36.668 15:35:06 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:36.668 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:36.668 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:36.668 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.668 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.668 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.668 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:36.668 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:36.668 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.668 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7554616 kB' 'MemAvailable: 9497936 kB' 'Buffers: 2436 kB' 'Cached: 2153292 kB' 'SwapCached: 0 kB' 'Active: 892236 kB' 'Inactive: 1386080 kB' 'Active(anon): 133052 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 
'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1332 kB' 'Writeback: 0 kB' 'AnonPages: 124160 kB' 'Mapped: 48976 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144740 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74748 kB' 'KernelStack: 6564 kB' 'PageTables: 4388 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.668 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.668 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # 
read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 
00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.669 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.669 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.669 15:35:06 -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:36.670 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:36.670 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.670 nr_hugepages=1025 00:16:36.670 resv_hugepages=0 00:16:36.670 surplus_hugepages=0 00:16:36.670 15:35:06 -- setup/hugepages.sh@100 -- # resv=0 00:16:36.670 15:35:06 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:16:36.670 15:35:06 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:36.670 15:35:06 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:36.670 anon_hugepages=0 00:16:36.670 15:35:06 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:36.670 15:35:06 -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:16:36.670 15:35:06 -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:16:36.670 15:35:06 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:36.670 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:36.670 15:35:06 -- setup/common.sh@18 -- # local node= 00:16:36.670 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.670 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.670 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.670 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:36.670 15:35:06 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:36.670 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.670 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7554616 kB' 'MemAvailable: 9497936 kB' 'Buffers: 2436 kB' 'Cached: 2153292 kB' 'SwapCached: 0 kB' 'Active: 892100 kB' 'Inactive: 1386080 kB' 'Active(anon): 132916 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1332 kB' 'Writeback: 0 kB' 'AnonPages: 124024 kB' 'Mapped: 48852 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144732 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74740 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13459992 kB' 'Committed_AS: 355892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- 
setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.670 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.670 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 
15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:36.671 15:35:06 -- setup/common.sh@33 -- # echo 1025 00:16:36.671 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.671 15:35:06 -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv 
)) 00:16:36.671 15:35:06 -- setup/hugepages.sh@112 -- # get_nodes 00:16:36.671 15:35:06 -- setup/hugepages.sh@27 -- # local node 00:16:36.671 15:35:06 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:36.671 15:35:06 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1025 00:16:36.671 15:35:06 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:36.671 15:35:06 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:36.671 15:35:06 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:36.671 15:35:06 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:36.671 15:35:06 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:36.671 15:35:06 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:36.671 15:35:06 -- setup/common.sh@18 -- # local node=0 00:16:36.671 15:35:06 -- setup/common.sh@19 -- # local var val 00:16:36.671 15:35:06 -- setup/common.sh@20 -- # local mem_f mem 00:16:36.671 15:35:06 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:36.671 15:35:06 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:36.671 15:35:06 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:36.671 15:35:06 -- setup/common.sh@28 -- # mapfile -t mem 00:16:36.671 15:35:06 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7554616 kB' 'MemUsed: 4687364 kB' 'SwapCached: 0 kB' 'Active: 891856 kB' 'Inactive: 1386080 kB' 'Active(anon): 132672 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386080 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1332 kB' 'Writeback: 0 kB' 'FilePages: 2155728 kB' 'Mapped: 48852 kB' 'AnonPages: 123772 kB' 'Shmem: 10464 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69992 kB' 'Slab: 144732 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Surp: 0' 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.671 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.671 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- 
setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # continue 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # IFS=': ' 00:16:36.672 15:35:06 -- setup/common.sh@31 -- # read -r var val _ 00:16:36.672 15:35:06 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:36.672 15:35:06 -- setup/common.sh@33 -- # echo 0 00:16:36.672 15:35:06 -- setup/common.sh@33 -- # return 0 00:16:36.672 15:35:06 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:36.672 15:35:06 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:36.672 15:35:06 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:36.672 15:35:06 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:36.672 node0=1025 expecting 1025 00:16:36.672 15:35:06 -- setup/hugepages.sh@128 -- # echo 'node0=1025 expecting 1025' 00:16:36.672 15:35:06 -- setup/hugepages.sh@130 -- # [[ 1025 == \1\0\2\5 ]] 00:16:36.672 00:16:36.672 real 0m0.523s 00:16:36.672 user 0m0.243s 00:16:36.672 sys 0m0.310s 00:16:36.672 15:35:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:36.672 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:16:36.672 ************************************ 00:16:36.672 END TEST odd_alloc 00:16:36.672 ************************************ 00:16:36.672 15:35:06 -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:16:36.672 15:35:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:36.672 15:35:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:36.672 15:35:06 -- common/autotest_common.sh@10 -- # set +x 00:16:36.932 ************************************ 00:16:36.932 START TEST custom_alloc 00:16:36.932 ************************************ 00:16:36.932 15:35:06 -- common/autotest_common.sh@1111 -- # custom_alloc 00:16:36.932 15:35:06 -- setup/hugepages.sh@167 -- # local 
IFS=, 00:16:36.932 15:35:06 -- setup/hugepages.sh@169 -- # local node 00:16:36.932 15:35:06 -- setup/hugepages.sh@170 -- # nodes_hp=() 00:16:36.932 15:35:06 -- setup/hugepages.sh@170 -- # local nodes_hp 00:16:36.932 15:35:06 -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:16:36.932 15:35:06 -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:16:36.932 15:35:06 -- setup/hugepages.sh@49 -- # local size=1048576 00:16:36.932 15:35:06 -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:16:36.932 15:35:06 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:16:36.932 15:35:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:36.932 15:35:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:36.932 15:35:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:36.932 15:35:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:36.932 15:35:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:36.932 15:35:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:36.932 15:35:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:16:36.932 15:35:06 -- setup/hugepages.sh@83 -- # : 0 00:16:36.932 15:35:06 -- setup/hugepages.sh@84 -- # : 0 00:16:36.932 15:35:06 -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:16:36.932 15:35:06 -- setup/hugepages.sh@176 -- # (( 1 > 1 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:16:36.932 15:35:06 -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:16:36.932 15:35:06 -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:16:36.932 15:35:06 -- setup/hugepages.sh@62 -- # user_nodes=() 00:16:36.932 15:35:06 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:36.932 15:35:06 -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:16:36.932 15:35:06 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:36.932 15:35:06 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:36.932 15:35:06 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:36.932 15:35:06 -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:16:36.932 15:35:06 -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:16:36.932 15:35:06 -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:16:36.932 15:35:06 -- setup/hugepages.sh@78 -- # return 0 00:16:36.932 15:35:06 -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512' 00:16:36.932 15:35:06 -- setup/hugepages.sh@187 -- # setup output 00:16:36.932 15:35:06 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:36.932 15:35:06 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:37.195 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.195 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:37.195 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:37.195 15:35:07 -- setup/hugepages.sh@188 -- # 
nr_hugepages=512 00:16:37.195 15:35:07 -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:16:37.195 15:35:07 -- setup/hugepages.sh@89 -- # local node 00:16:37.195 15:35:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:37.195 15:35:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:37.195 15:35:07 -- setup/hugepages.sh@92 -- # local surp 00:16:37.195 15:35:07 -- setup/hugepages.sh@93 -- # local resv 00:16:37.195 15:35:07 -- setup/hugepages.sh@94 -- # local anon 00:16:37.195 15:35:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:37.195 15:35:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:37.195 15:35:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:37.195 15:35:07 -- setup/common.sh@18 -- # local node= 00:16:37.195 15:35:07 -- setup/common.sh@19 -- # local var val 00:16:37.195 15:35:07 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.195 15:35:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.195 15:35:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.195 15:35:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.195 15:35:07 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.195 15:35:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8599676 kB' 'MemAvailable: 10543004 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 892472 kB' 'Inactive: 1386088 kB' 'Active(anon): 133288 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1480 kB' 'Writeback: 0 kB' 'AnonPages: 124756 kB' 'Mapped: 49020 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144692 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74700 kB' 'KernelStack: 6612 kB' 'PageTables: 4508 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 55012 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r 
var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.195 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.195 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 
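(Editorial note: the wall of xtrace around this point is the harness's get_meminfo helper scanning every meminfo field until it reaches the one requested, using IFS=': ' reads against either /proc/meminfo or /sys/devices/system/node/nodeN/meminfo. The following is a minimal, hedged sketch of that parsing pattern reconstructed from the trace; the function name get_meminfo_field is illustrative and not part of setup/common.sh, which uses mapfile plus a "Node N" prefix strip instead of the sed shown here.)

    #!/usr/bin/env bash
    # Sketch of the field scan visible in the trace above: walk a meminfo
    # source with IFS=': ' and print the value of the requested field.
    get_meminfo_field() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # Per-node stats live under /sys when a NUMA node is given.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local var val _
        # Per-node files prefix every line with "Node N "; drop that so the
        # field name is always the first token, as in /proc/meminfo.
        while IFS=': ' read -r var val _; do
            if [[ $var == "$get" ]]; then
                echo "$val"
                return 0
            fi
        done < <(sed 's/^Node [0-9]* //' "$mem_f")
        return 1
    }

    # Example: get_meminfo_field HugePages_Surp 0   # surplus hugepages on node 0

(End of editorial note; the captured trace continues below.)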
00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.196 15:35:07 -- setup/common.sh@33 -- # echo 0 00:16:37.196 15:35:07 -- setup/common.sh@33 -- # return 0 00:16:37.196 15:35:07 -- setup/hugepages.sh@97 -- # anon=0 00:16:37.196 15:35:07 -- setup/hugepages.sh@99 -- # get_meminfo 
HugePages_Surp 00:16:37.196 15:35:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:37.196 15:35:07 -- setup/common.sh@18 -- # local node= 00:16:37.196 15:35:07 -- setup/common.sh@19 -- # local var val 00:16:37.196 15:35:07 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.196 15:35:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.196 15:35:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.196 15:35:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.196 15:35:07 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.196 15:35:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8599676 kB' 'MemAvailable: 10543004 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 892352 kB' 'Inactive: 1386088 kB' 'Active(anon): 133168 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1480 kB' 'Writeback: 0 kB' 'AnonPages: 124312 kB' 'Mapped: 48960 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144692 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74700 kB' 'KernelStack: 6580 kB' 'PageTables: 4396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.196 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.196 15:35:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # 
continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- 
setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.197 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.197 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.198 15:35:07 -- setup/common.sh@33 -- # echo 0 00:16:37.198 15:35:07 -- setup/common.sh@33 -- # return 0 00:16:37.198 15:35:07 -- setup/hugepages.sh@99 -- # surp=0 00:16:37.198 15:35:07 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:37.198 15:35:07 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:37.198 15:35:07 -- setup/common.sh@18 -- # local node= 00:16:37.198 15:35:07 -- setup/common.sh@19 -- # local var val 00:16:37.198 15:35:07 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.198 15:35:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.198 15:35:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.198 15:35:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.198 15:35:07 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.198 15:35:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8599676 kB' 'MemAvailable: 10543004 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 
'Active: 892088 kB' 'Inactive: 1386088 kB' 'Active(anon): 132904 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1480 kB' 'Writeback: 0 kB' 'AnonPages: 123980 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144688 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74696 kB' 'KernelStack: 6576 kB' 'PageTables: 4308 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 
-- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 
15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.198 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.198 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 
15:35:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 
15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.199 15:35:07 -- setup/common.sh@33 -- # echo 0 00:16:37.199 15:35:07 -- setup/common.sh@33 -- # return 0 00:16:37.199 15:35:07 -- setup/hugepages.sh@100 -- # resv=0 00:16:37.199 nr_hugepages=512 00:16:37.199 15:35:07 -- setup/hugepages.sh@102 -- # echo nr_hugepages=512 00:16:37.199 resv_hugepages=0 00:16:37.199 15:35:07 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:37.199 surplus_hugepages=0 00:16:37.199 15:35:07 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:37.199 anon_hugepages=0 00:16:37.199 15:35:07 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:37.199 15:35:07 -- setup/hugepages.sh@107 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:37.199 15:35:07 -- setup/hugepages.sh@109 -- # (( 512 == nr_hugepages )) 00:16:37.199 15:35:07 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:37.199 15:35:07 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:37.199 15:35:07 -- setup/common.sh@18 -- # local node= 00:16:37.199 15:35:07 -- setup/common.sh@19 -- # local var val 00:16:37.199 15:35:07 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.199 15:35:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.199 15:35:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.199 15:35:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.199 15:35:07 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.199 15:35:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8599676 kB' 'MemAvailable: 10543004 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 892116 kB' 'Inactive: 1386088 kB' 'Active(anon): 132932 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1480 kB' 'Writeback: 0 kB' 'AnonPages: 124036 kB' 'Mapped: 48864 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144688 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74696 kB' 'KernelStack: 6608 kB' 'PageTables: 4412 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13985304 kB' 'Committed_AS: 356020 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54980 
kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 1048576 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.199 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.199 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 
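The run of "[[ <field> == HugePages_Total ]] / continue" entries above and below is setup/common.sh's get_meminfo helper at work: it snapshots /proc/meminfo (or, when a node id is passed, /sys/devices/system/node/node<N>/meminfo), walks every field with IFS=': ' and read -r var val _, and echoes the value of the one requested key, which is how surp and resv both came back as 0 a few entries earlier. A minimal sketch of that behaviour, assuming a simplified while-read loop instead of the mapfile-based implementation actually traced here (get_meminfo_sketch is an illustrative name, not the SPDK helper itself):

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    local line var val rest
    while IFS= read -r line; do
        # Per-node meminfo lines carry a "Node <id> " prefix; drop it so the
        # keys match the global /proc/meminfo names.
        line=${line#Node "$node" }
        # Split e.g. "HugePages_Total:     512" into key and value.
        IFS=': ' read -r var val rest <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "${val:-0}"
            return 0
        fi
    done < "$mem_f"
    echo 0
}

With the values shown in the printf above, get_meminfo_sketch HugePages_Surp and get_meminfo_sketch HugePages_Rsvd both print 0 and get_meminfo_sketch HugePages_Total prints 512, matching the surp=0 and resv=0 assignments already logged and the 512 the traced loop goes on to echo.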
00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 
-- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # 
[[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.200 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.200 15:35:07 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.201 15:35:07 -- setup/common.sh@33 -- # echo 512 00:16:37.201 15:35:07 -- 
setup/common.sh@33 -- # return 0 00:16:37.201 15:35:07 -- setup/hugepages.sh@110 -- # (( 512 == nr_hugepages + surp + resv )) 00:16:37.201 15:35:07 -- setup/hugepages.sh@112 -- # get_nodes 00:16:37.201 15:35:07 -- setup/hugepages.sh@27 -- # local node 00:16:37.201 15:35:07 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:37.201 15:35:07 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:16:37.201 15:35:07 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:37.201 15:35:07 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:37.201 15:35:07 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:37.201 15:35:07 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:37.201 15:35:07 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:37.201 15:35:07 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:37.201 15:35:07 -- setup/common.sh@18 -- # local node=0 00:16:37.201 15:35:07 -- setup/common.sh@19 -- # local var val 00:16:37.201 15:35:07 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.201 15:35:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.201 15:35:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:37.201 15:35:07 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:37.201 15:35:07 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.201 15:35:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 8599952 kB' 'MemUsed: 3642028 kB' 'SwapCached: 0 kB' 'Active: 891896 kB' 'Inactive: 1386088 kB' 'Active(anon): 132712 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1480 kB' 'Writeback: 0 kB' 'FilePages: 2155736 kB' 'Mapped: 48864 kB' 'AnonPages: 124108 kB' 'Shmem: 10464 kB' 'KernelStack: 6624 kB' 'PageTables: 4464 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69992 kB' 'Slab: 144688 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74696 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 
-- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.201 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.201 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- 
# continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.461 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.461 15:35:07 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- 
# [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.462 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.462 15:35:07 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.462 15:35:07 -- setup/common.sh@33 -- # echo 0 00:16:37.462 15:35:07 -- setup/common.sh@33 -- # return 0 00:16:37.462 15:35:07 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:37.462 15:35:07 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:37.462 15:35:07 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:37.462 15:35:07 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:37.462 15:35:07 -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:16:37.462 node0=512 expecting 512 00:16:37.462 15:35:07 -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:16:37.462 00:16:37.462 real 0m0.531s 00:16:37.462 user 0m0.297s 00:16:37.462 sys 0m0.267s 00:16:37.462 15:35:07 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:37.462 15:35:07 -- common/autotest_common.sh@10 -- # set +x 00:16:37.462 ************************************ 00:16:37.462 END TEST custom_alloc 00:16:37.462 ************************************ 00:16:37.462 15:35:07 -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:16:37.462 15:35:07 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:37.462 15:35:07 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:37.462 15:35:07 -- common/autotest_common.sh@10 -- # set +x 00:16:37.462 ************************************ 00:16:37.462 START TEST no_shrink_alloc 00:16:37.462 ************************************ 00:16:37.462 15:35:07 -- 
common/autotest_common.sh@1111 -- # no_shrink_alloc 00:16:37.462 15:35:07 -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:16:37.462 15:35:07 -- setup/hugepages.sh@49 -- # local size=2097152 00:16:37.462 15:35:07 -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:16:37.462 15:35:07 -- setup/hugepages.sh@51 -- # shift 00:16:37.462 15:35:07 -- setup/hugepages.sh@52 -- # node_ids=('0') 00:16:37.462 15:35:07 -- setup/hugepages.sh@52 -- # local node_ids 00:16:37.462 15:35:07 -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:16:37.462 15:35:07 -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:16:37.462 15:35:07 -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:16:37.462 15:35:07 -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:16:37.462 15:35:07 -- setup/hugepages.sh@62 -- # local user_nodes 00:16:37.462 15:35:07 -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:16:37.462 15:35:07 -- setup/hugepages.sh@65 -- # local _no_nodes=1 00:16:37.462 15:35:07 -- setup/hugepages.sh@67 -- # nodes_test=() 00:16:37.462 15:35:07 -- setup/hugepages.sh@67 -- # local -g nodes_test 00:16:37.462 15:35:07 -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:16:37.462 15:35:07 -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:16:37.462 15:35:07 -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:16:37.462 15:35:07 -- setup/hugepages.sh@73 -- # return 0 00:16:37.462 15:35:07 -- setup/hugepages.sh@198 -- # setup output 00:16:37.462 15:35:07 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:37.462 15:35:07 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:37.721 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:37.721 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:37.721 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:37.721 15:35:07 -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:16:37.721 15:35:07 -- setup/hugepages.sh@89 -- # local node 00:16:37.721 15:35:07 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:37.721 15:35:07 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:37.721 15:35:07 -- setup/hugepages.sh@92 -- # local surp 00:16:37.721 15:35:07 -- setup/hugepages.sh@93 -- # local resv 00:16:37.721 15:35:07 -- setup/hugepages.sh@94 -- # local anon 00:16:37.721 15:35:07 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:37.721 15:35:07 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:37.721 15:35:07 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:37.721 15:35:07 -- setup/common.sh@18 -- # local node= 00:16:37.721 15:35:07 -- setup/common.sh@19 -- # local var val 00:16:37.721 15:35:07 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.721 15:35:07 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.721 15:35:07 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.721 15:35:07 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.721 15:35:07 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.721 15:35:07 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.721 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.721 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.721 15:35:07 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7546780 kB' 'MemAvailable: 9490108 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 892176 kB' 
'Inactive: 1386088 kB' 'Active(anon): 132992 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1612 kB' 'Writeback: 0 kB' 'AnonPages: 124120 kB' 'Mapped: 48920 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144644 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74652 kB' 'KernelStack: 6628 kB' 'PageTables: 4544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54996 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.721 15:35:07 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.721 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.721 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.721 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:07 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:07 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 
-- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 
15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.722 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.722 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:37.984 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:37.984 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:37.984 15:35:08 -- setup/hugepages.sh@97 -- # anon=0 00:16:37.984 15:35:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:37.984 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:37.984 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:37.984 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:37.984 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.984 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.984 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.984 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.984 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.984 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7546780 kB' 'MemAvailable: 9490108 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 892068 kB' 'Inactive: 1386088 kB' 'Active(anon): 132884 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1612 kB' 'Writeback: 0 kB' 'AnonPages: 124184 kB' 'Mapped: 49000 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144640 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74648 kB' 'KernelStack: 6628 kB' 'PageTables: 4536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 355652 kB' 'VmallocTotal: 34359738367 kB' 
'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.984 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.984 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 
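
The block above is the script's get_meminfo helper walking the /proc/meminfo snapshot it just printed: every "[[ Key == \T\a\r\g\e\t ]]" followed by "continue" is one meminfo key being skipped until the requested counter is reached, at which point its value is echoed back to the caller. A minimal sketch of that scan, reconstructed from the trace rather than copied from setup/common.sh (the helper name and the while-read form here are simplifications; the real script loads the file with mapfile first), would look like:

    # Simplified reconstruction of the key scan traced in this log (hypothetical name).
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # the long run of "continue" records above
            echo "$val"                        # kB figure, or a bare count for HugePages_*
            return 0
        done < /proc/meminfo
        return 1                               # requested key not present
    }
    # Callers capture the result, e.g. surp=$(get_meminfo_sketch HugePages_Surp)
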
00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
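
The "[[ -e /sys/devices/system/node/node/meminfo ]]" test near the top of each call is how the helper picks its input file: with no node argument the path ends in the doubled ".../node/node/meminfo", which cannot exist, so the system-wide /proc/meminfo is kept; with an explicit node number it switches to the per-node sysfs file. Roughly, as seen in the trace (a sketch, with $node empty for the system-wide case):

    node=${node-}            # empty here; set to "0" for the per-node pass later in this run
    mem_f=/proc/meminfo
    if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
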
00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.985 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.985 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 
-- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.986 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:37.986 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:37.986 15:35:08 -- setup/hugepages.sh@99 -- # surp=0 00:16:37.986 15:35:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:37.986 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:37.986 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:37.986 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:37.986 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.986 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.986 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.986 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.986 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.986 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7547032 kB' 'MemAvailable: 9490360 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 891656 kB' 'Inactive: 1386088 kB' 'Active(anon): 132472 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1612 kB' 'Writeback: 0 kB' 'AnonPages: 123640 kB' 'Mapped: 49000 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144640 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74648 kB' 'KernelStack: 6532 kB' 'PageTables: 4240 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54948 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 
-- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.986 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.986 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 
15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:37.987 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:37.987 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:37.987 15:35:08 -- setup/hugepages.sh@100 -- # resv=0 00:16:37.987 nr_hugepages=1024 00:16:37.987 15:35:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:37.987 resv_hugepages=0 00:16:37.987 15:35:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:37.987 surplus_hugepages=0 00:16:37.987 15:35:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:37.987 anon_hugepages=0 00:16:37.987 15:35:08 -- 
setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:37.987 15:35:08 -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:37.987 15:35:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:37.987 15:35:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:37.987 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:37.987 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:37.987 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:37.987 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.987 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.987 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:37.987 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:37.987 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.987 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7547032 kB' 'MemAvailable: 9490360 kB' 'Buffers: 2436 kB' 'Cached: 2153300 kB' 'SwapCached: 0 kB' 'Active: 891844 kB' 'Inactive: 1386088 kB' 'Active(anon): 132660 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1612 kB' 'Writeback: 0 kB' 'AnonPages: 123844 kB' 'Mapped: 48872 kB' 'Shmem: 10464 kB' 'KReclaimable: 69992 kB' 'Slab: 144640 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74648 kB' 'KernelStack: 6592 kB' 'PageTables: 4364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 356016 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54964 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- 
# [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.987 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.987 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': 
' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
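
What these repeated scans feed is the consistency check traced around hugepages.sh@107 and @110: the HugePages_Total the kernel reports must equal the requested page count plus any surplus and reserved pages. With the values echoed above (nr_hugepages=1024, resv_hugepages=0, surplus_hugepages=0) the arithmetic is simply 1024 == 1024 + 0 + 0. A simplified rendering of that check, with approximate variable names, is:

    nr_hugepages=1024   # requested, from "echo nr_hugepages=1024"
    surp=0              # get_meminfo HugePages_Surp
    resv=0              # get_meminfo HugePages_Rsvd
    total=1024          # get_meminfo HugePages_Total
    (( total == nr_hugepages + surp + resv )) || echo "hugepage accounting mismatch"
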
00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.988 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.988 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:37.988 15:35:08 -- setup/common.sh@33 -- # echo 1024 00:16:37.988 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:37.988 15:35:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:37.988 15:35:08 -- setup/hugepages.sh@112 -- # get_nodes 00:16:37.989 15:35:08 -- setup/hugepages.sh@27 -- # local node 00:16:37.989 15:35:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:37.989 15:35:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:37.989 15:35:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:37.989 15:35:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:37.989 15:35:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:37.989 15:35:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:37.989 15:35:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:37.989 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:37.989 15:35:08 -- setup/common.sh@18 -- # local node=0 00:16:37.989 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:37.989 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:37.989 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:37.989 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:37.989 15:35:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:37.989 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:37.989 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:37.989 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7547032 kB' 'MemUsed: 4694948 
kB' 'SwapCached: 0 kB' 'Active: 891940 kB' 'Inactive: 1386088 kB' 'Active(anon): 132756 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386088 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1612 kB' 'Writeback: 0 kB' 'FilePages: 2155736 kB' 'Mapped: 48872 kB' 'AnonPages: 124196 kB' 'Shmem: 10464 kB' 'KernelStack: 6608 kB' 'PageTables: 4416 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69992 kB' 'Slab: 144640 kB' 'SReclaimable: 69992 kB' 'SUnreclaim: 74648 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 
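
This second pass reads /sys/devices/system/node/node0/meminfo instead of /proc/meminfo; the per-node file prefixes every line with "Node 0 ", which the mem=("${mem[@]#Node +([0-9]) }") step strips before the same key matching runs, and the per-node HugePages totals then populate the nodes_test accounting. A compact equivalent of that per-node read, simplified from the trace (assumes node 0 exists and extglob enabled for the +([0-9]) pattern):

    shopt -s extglob                                  # needed for the +([0-9]) pattern below
    node=0
    mapfile -t mem < "/sys/devices/system/node/node$node/meminfo"
    mem=("${mem[@]#Node +([0-9]) }")                  # drop the "Node 0 " prefix from every line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && echo "node$node HugePages_Surp: $val"
    done
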
00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.989 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.989 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 
15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # continue 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:37.990 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:37.990 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:37.990 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:37.990 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:37.990 15:35:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:37.990 15:35:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:37.990 15:35:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:37.990 15:35:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:37.990 node0=1024 expecting 1024 00:16:37.990 15:35:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:37.990 15:35:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:37.990 15:35:08 -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:16:37.990 15:35:08 -- setup/hugepages.sh@202 -- # NRHUGE=512 00:16:37.990 15:35:08 -- setup/hugepages.sh@202 -- # setup output 00:16:37.990 15:35:08 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:37.990 15:35:08 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:38.249 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:38.249 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:38.249 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:16:38.249 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:16:38.249 15:35:08 -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:16:38.249 15:35:08 -- setup/hugepages.sh@89 -- # local node 00:16:38.249 15:35:08 -- setup/hugepages.sh@90 -- # local sorted_t 00:16:38.249 15:35:08 -- setup/hugepages.sh@91 -- # local sorted_s 00:16:38.249 15:35:08 -- setup/hugepages.sh@92 -- # local surp 00:16:38.249 15:35:08 -- setup/hugepages.sh@93 -- # local resv 00:16:38.249 15:35:08 -- setup/hugepages.sh@94 -- # local anon 00:16:38.249 15:35:08 -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:16:38.249 15:35:08 -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:16:38.249 15:35:08 -- setup/common.sh@17 -- # local get=AnonHugePages 00:16:38.249 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:38.249 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:38.249 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:38.249 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:38.249 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:38.249 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:38.249 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:38.249 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548308 kB' 'MemAvailable: 9491648 kB' 'Buffers: 2436 kB' 'Cached: 2153316 kB' 'SwapCached: 0 kB' 'Active: 887948 kB' 'Inactive: 1386104 kB' 'Active(anon): 128764 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386104 kB' 
'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1628 kB' 'Writeback: 0 kB' 'AnonPages: 120196 kB' 'Mapped: 48916 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 144596 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 74616 kB' 'KernelStack: 6532 kB' 'PageTables: 4088 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 342448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54868 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.249 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.249 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 
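[Editor's note] The long runs of `[[ <key> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]` / `continue` entries around this point are the xtrace of setup/common.sh's get_meminfo walking every line of /proc/meminfo (or a node-specific meminfo file) until it reaches the requested key. A minimal sketch of that lookup pattern is shown below; the function and variable names are illustrative, not the verbatim SPDK script.

```bash
#!/usr/bin/env bash
# Sketch of the lookup pattern visible in the trace: read a meminfo file,
# strip any "Node N " prefix, and print the value for one requested key
# (e.g. HugePages_Surp). Names here are illustrative, not SPDK's originals.
shopt -s extglob

get_meminfo_sketch() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    # Per-node lookups read the node-specific meminfo instead.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi

    local -a mem
    mapfile -t mem < "$mem_f"
    # Node files prefix every line with "Node N "; drop that prefix.
    mem=("${mem[@]#Node +([0-9]) }")

    local line var val _
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        # Skip every key until the requested one is reached.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done
    return 1
}

get_meminfo_sketch HugePages_Surp      # system-wide
get_meminfo_sketch HugePages_Surp 0    # node 0 only
```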
00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.250 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.250 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # 
IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:16:38.512 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:38.512 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:38.512 15:35:08 -- setup/hugepages.sh@97 -- # anon=0 00:16:38.512 15:35:08 -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:16:38.512 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:38.512 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:38.512 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:38.512 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:38.512 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:38.512 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:38.512 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:38.512 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:38.512 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7548056 kB' 'MemAvailable: 9491396 kB' 'Buffers: 2436 kB' 'Cached: 2153316 kB' 'SwapCached: 0 kB' 'Active: 887308 kB' 'Inactive: 1386104 kB' 'Active(anon): 128124 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1628 kB' 'Writeback: 0 kB' 'AnonPages: 119332 kB' 'Mapped: 48252 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 144564 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 74584 kB' 'KernelStack: 6496 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54836 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.512 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.512 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 
15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- 
setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 
-- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.513 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.513 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.514 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:38.514 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:38.514 15:35:08 -- setup/hugepages.sh@99 -- # surp=0 00:16:38.514 15:35:08 -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:16:38.514 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:16:38.514 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:38.514 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:38.514 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:38.514 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:38.514 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:38.514 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:38.514 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:38.514 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555868 kB' 'MemAvailable: 9499208 kB' 'Buffers: 2436 kB' 'Cached: 2153316 kB' 'SwapCached: 0 kB' 'Active: 887216 kB' 'Inactive: 1386104 kB' 'Active(anon): 128032 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1628 kB' 'Writeback: 0 kB' 'AnonPages: 119456 kB' 'Mapped: 48132 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 144544 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 74564 kB' 'KernelStack: 6512 kB' 'PageTables: 3900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- 
setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 
00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.514 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.514 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 
-- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:16:38.515 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:38.515 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:38.515 15:35:08 -- setup/hugepages.sh@100 -- # resv=0 00:16:38.515 15:35:08 -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:16:38.515 nr_hugepages=1024 00:16:38.515 resv_hugepages=0 00:16:38.515 15:35:08 -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:16:38.515 15:35:08 -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:16:38.515 surplus_hugepages=0 00:16:38.515 anon_hugepages=0 00:16:38.515 15:35:08 -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:16:38.515 15:35:08 -- setup/hugepages.sh@107 -- # (( 1024 == 
nr_hugepages + surp + resv )) 00:16:38.515 15:35:08 -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:16:38.515 15:35:08 -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:16:38.515 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Total 00:16:38.515 15:35:08 -- setup/common.sh@18 -- # local node= 00:16:38.515 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:38.515 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:38.515 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:38.515 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:16:38.515 15:35:08 -- setup/common.sh@25 -- # [[ -n '' ]] 00:16:38.515 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:38.515 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 12241980 kB' 'MemFree: 7555868 kB' 'MemAvailable: 9499208 kB' 'Buffers: 2436 kB' 'Cached: 2153316 kB' 'SwapCached: 0 kB' 'Active: 887288 kB' 'Inactive: 1386104 kB' 'Active(anon): 128104 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 1628 kB' 'Writeback: 0 kB' 'AnonPages: 119516 kB' 'Mapped: 48132 kB' 'Shmem: 10464 kB' 'KReclaimable: 69980 kB' 'Slab: 144540 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 74560 kB' 'KernelStack: 6496 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 13461016 kB' 'Committed_AS: 339412 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 54852 kB' 'VmallocChunk: 0 kB' 'Percpu: 6240 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 188268 kB' 'DirectMap2M: 5054464 kB' 'DirectMap1G: 9437184 kB' 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 
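[Editor's note] The `(( 1024 == nr_hugepages + surp + resv ))` and `(( 1024 == nr_hugepages ))` checks just traced, followed by the per-node HugePages_Surp lookups against /sys/devices/system/node/node0/meminfo, are the hugepage accounting that verify_nr_hugepages performs. A self-contained sketch of that accounting is below; the helper name, the warning text, and the exact per-node arithmetic are assumptions made for illustration, not the SPDK script itself.

```bash
#!/usr/bin/env bash
# Sketch of the hugepage accounting seen in the trace: the total reported by
# the kernel must cover the requested pages plus surplus and reserved ones,
# and each NUMA node's allocation is reported against the request.

meminfo_val() {
    # Print the numeric value for one meminfo key, optionally per NUMA node.
    local key=$1 node=${2:-} f=/proc/meminfo
    [[ -n $node ]] && f=/sys/devices/system/node/node${node}/meminfo
    awk -v k="$key:" '{for (i = 1; i <= NF; i++) if ($i == k) {print $(i + 1); exit}}' "$f"
}

expected=1024    # hugepages the test asked for (NRHUGE-style request)

total=$(meminfo_val HugePages_Total)
surp=$(meminfo_val HugePages_Surp)
resv=$(meminfo_val HugePages_Rsvd)
anon=$(meminfo_val AnonHugePages)

echo "nr_hugepages=$total resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"

# System-wide: the kernel's total must account for request + surplus + reserved.
(( total == expected + surp + resv )) || echo "WARN: hugepage accounting mismatch"

# Per node: report what each node actually holds versus what was expected.
for d in /sys/devices/system/node/node[0-9]*; do
    n=${d##*node}
    node_total=$(meminfo_val HugePages_Total "$n")
    node_surp=$(meminfo_val HugePages_Surp "$n")
    echo "node${n}=$(( node_total - node_surp )) expecting $expected"
done
```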
00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.515 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.515 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- 
setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- 
setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- 
setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.516 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.516 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:16:38.517 15:35:08 -- setup/common.sh@33 -- # echo 1024 00:16:38.517 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:38.517 15:35:08 -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:16:38.517 15:35:08 -- setup/hugepages.sh@112 -- # get_nodes 00:16:38.517 15:35:08 -- setup/hugepages.sh@27 -- # local node 00:16:38.517 15:35:08 -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:16:38.517 15:35:08 -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:16:38.517 15:35:08 -- setup/hugepages.sh@32 -- # no_nodes=1 00:16:38.517 15:35:08 -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:16:38.517 15:35:08 -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:16:38.517 15:35:08 -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:16:38.517 15:35:08 -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:16:38.517 15:35:08 -- setup/common.sh@17 -- # local get=HugePages_Surp 00:16:38.517 15:35:08 -- setup/common.sh@18 -- # local node=0 00:16:38.517 15:35:08 -- setup/common.sh@19 -- # local var val 00:16:38.517 15:35:08 -- setup/common.sh@20 -- # local mem_f mem 00:16:38.517 15:35:08 -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:16:38.517 15:35:08 -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:16:38.517 15:35:08 -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:16:38.517 15:35:08 -- setup/common.sh@28 -- # mapfile -t mem 00:16:38.517 15:35:08 -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@16 -- # printf '%s\n' 
'MemTotal: 12241980 kB' 'MemFree: 7555868 kB' 'MemUsed: 4686112 kB' 'SwapCached: 0 kB' 'Active: 887164 kB' 'Inactive: 1386104 kB' 'Active(anon): 127980 kB' 'Inactive(anon): 0 kB' 'Active(file): 759184 kB' 'Inactive(file): 1386104 kB' 'Unevictable: 1536 kB' 'Mlocked: 0 kB' 'Dirty: 1628 kB' 'Writeback: 0 kB' 'FilePages: 2155752 kB' 'Mapped: 48132 kB' 'AnonPages: 119424 kB' 'Shmem: 10464 kB' 'KernelStack: 6496 kB' 'PageTables: 3848 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 69980 kB' 'Slab: 144532 kB' 'SReclaimable: 69980 kB' 'SUnreclaim: 74552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 
00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- 
setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.517 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.517 15:35:08 -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 
00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # continue 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # IFS=': ' 00:16:38.518 15:35:08 -- setup/common.sh@31 -- # read -r var val _ 00:16:38.518 15:35:08 -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:16:38.518 15:35:08 -- setup/common.sh@33 -- # echo 0 00:16:38.518 15:35:08 -- setup/common.sh@33 -- # return 0 00:16:38.518 15:35:08 -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:16:38.518 15:35:08 -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:16:38.518 15:35:08 -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:16:38.518 15:35:08 -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:16:38.518 node0=1024 expecting 1024 00:16:38.518 15:35:08 -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:16:38.518 15:35:08 -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:16:38.518 00:16:38.518 real 0m1.059s 00:16:38.518 user 0m0.570s 00:16:38.518 sys 0m0.554s 00:16:38.518 15:35:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.518 15:35:08 -- common/autotest_common.sh@10 -- # set +x 00:16:38.518 ************************************ 00:16:38.518 END TEST no_shrink_alloc 00:16:38.518 ************************************ 00:16:38.518 15:35:08 -- setup/hugepages.sh@217 -- # clear_hp 00:16:38.518 15:35:08 -- setup/hugepages.sh@37 -- # local node hp 00:16:38.518 15:35:08 -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:16:38.518 15:35:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:38.518 15:35:08 -- setup/hugepages.sh@41 -- # echo 0 00:16:38.518 15:35:08 -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:16:38.518 15:35:08 -- setup/hugepages.sh@41 -- # echo 0 00:16:38.518 15:35:08 -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:16:38.518 15:35:08 -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:16:38.518 00:16:38.518 real 0m5.155s 00:16:38.518 user 0m2.403s 00:16:38.518 sys 0m2.697s 00:16:38.518 15:35:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:38.518 15:35:08 -- common/autotest_common.sh@10 -- # set +x 00:16:38.518 ************************************ 00:16:38.518 END TEST hugepages 00:16:38.518 ************************************ 00:16:38.518 15:35:08 -- setup/test-setup.sh@14 -- # run_test driver /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:16:38.518 15:35:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:38.518 15:35:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:38.518 15:35:08 -- common/autotest_common.sh@10 -- # set +x 00:16:38.776 ************************************ 00:16:38.776 START TEST driver 00:16:38.776 ************************************ 00:16:38.776 15:35:08 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/driver.sh 00:16:38.776 * Looking for test storage... 
00:16:38.776 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:38.776 15:35:08 -- setup/driver.sh@68 -- # setup reset 00:16:38.776 15:35:08 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:38.776 15:35:08 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:39.343 15:35:09 -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:16:39.343 15:35:09 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:39.343 15:35:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:39.343 15:35:09 -- common/autotest_common.sh@10 -- # set +x 00:16:39.343 ************************************ 00:16:39.343 START TEST guess_driver 00:16:39.343 ************************************ 00:16:39.343 15:35:09 -- common/autotest_common.sh@1111 -- # guess_driver 00:16:39.343 15:35:09 -- setup/driver.sh@46 -- # local driver setup_driver marker 00:16:39.343 15:35:09 -- setup/driver.sh@47 -- # local fail=0 00:16:39.343 15:35:09 -- setup/driver.sh@49 -- # pick_driver 00:16:39.343 15:35:09 -- setup/driver.sh@36 -- # vfio 00:16:39.343 15:35:09 -- setup/driver.sh@21 -- # local iommu_grups 00:16:39.343 15:35:09 -- setup/driver.sh@22 -- # local unsafe_vfio 00:16:39.343 15:35:09 -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:16:39.343 15:35:09 -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:16:39.343 15:35:09 -- setup/driver.sh@29 -- # (( 0 > 0 )) 00:16:39.343 15:35:09 -- setup/driver.sh@29 -- # [[ '' == Y ]] 00:16:39.343 15:35:09 -- setup/driver.sh@32 -- # return 1 00:16:39.343 15:35:09 -- setup/driver.sh@38 -- # uio 00:16:39.343 15:35:09 -- setup/driver.sh@17 -- # is_driver uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@14 -- # mod uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@12 -- # dep uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@11 -- # modprobe --show-depends uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio.ko.xz 00:16:39.343 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/uio/uio_pci_generic.ko.xz == *\.\k\o* ]] 00:16:39.343 15:35:09 -- setup/driver.sh@39 -- # echo uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@49 -- # driver=uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@51 -- # [[ uio_pci_generic == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:16:39.343 Looking for driver=uio_pci_generic 00:16:39.343 15:35:09 -- setup/driver.sh@56 -- # echo 'Looking for driver=uio_pci_generic' 00:16:39.343 15:35:09 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:39.343 15:35:09 -- setup/driver.sh@45 -- # setup output config 00:16:39.343 15:35:09 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:39.343 15:35:09 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:40.277 15:35:10 -- setup/driver.sh@58 -- # [[ devices: == \-\> ]] 00:16:40.277 15:35:10 -- setup/driver.sh@58 -- # continue 00:16:40.277 15:35:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:40.277 15:35:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:40.277 15:35:10 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:16:40.278 15:35:10 -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:40.278 15:35:10 -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:16:40.278 15:35:10 -- setup/driver.sh@61 -- # [[ uio_pci_generic == uio_pci_generic ]] 00:16:40.278 15:35:10 -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:16:40.278 15:35:10 -- setup/driver.sh@64 -- # (( fail == 0 )) 00:16:40.278 15:35:10 -- setup/driver.sh@65 -- # setup reset 00:16:40.278 15:35:10 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:40.278 15:35:10 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:40.844 00:16:40.844 real 0m1.465s 00:16:40.844 user 0m0.573s 00:16:40.844 sys 0m0.888s 00:16:40.844 15:35:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:40.844 15:35:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.844 ************************************ 00:16:40.844 END TEST guess_driver 00:16:40.844 ************************************ 00:16:40.844 00:16:40.844 real 0m2.244s 00:16:40.844 user 0m0.854s 00:16:40.844 sys 0m1.418s 00:16:40.844 15:35:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:40.844 15:35:11 -- common/autotest_common.sh@10 -- # set +x 00:16:40.844 ************************************ 00:16:40.844 END TEST driver 00:16:40.844 ************************************ 00:16:40.844 15:35:11 -- setup/test-setup.sh@15 -- # run_test devices /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:16:40.844 15:35:11 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:40.844 15:35:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:40.844 15:35:11 -- common/autotest_common.sh@10 -- # set +x 00:16:41.102 ************************************ 00:16:41.102 START TEST devices 00:16:41.102 ************************************ 00:16:41.102 15:35:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/setup/devices.sh 00:16:41.102 * Looking for test storage... 00:16:41.102 * Found test storage at /home/vagrant/spdk_repo/spdk/test/setup 00:16:41.102 15:35:11 -- setup/devices.sh@190 -- # trap cleanup EXIT 00:16:41.102 15:35:11 -- setup/devices.sh@192 -- # setup reset 00:16:41.102 15:35:11 -- setup/common.sh@9 -- # [[ reset == output ]] 00:16:41.102 15:35:11 -- setup/common.sh@12 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:42.038 15:35:12 -- setup/devices.sh@194 -- # get_zoned_devs 00:16:42.038 15:35:12 -- common/autotest_common.sh@1655 -- # zoned_devs=() 00:16:42.038 15:35:12 -- common/autotest_common.sh@1655 -- # local -gA zoned_devs 00:16:42.038 15:35:12 -- common/autotest_common.sh@1656 -- # local nvme bdf 00:16:42.038 15:35:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:42.038 15:35:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n1 00:16:42.038 15:35:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:16:42.038 15:35:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:42.038 15:35:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n2 00:16:42.038 15:35:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:16:42.038 15:35:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:42.038 15:35:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme0n3 00:16:42.038 15:35:12 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:16:42.038 15:35:12 -- 
common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1658 -- # for nvme in /sys/block/nvme* 00:16:42.038 15:35:12 -- common/autotest_common.sh@1659 -- # is_block_zoned nvme1n1 00:16:42.038 15:35:12 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:16:42.038 15:35:12 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:16:42.038 15:35:12 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:16:42.038 15:35:12 -- setup/devices.sh@196 -- # blocks=() 00:16:42.038 15:35:12 -- setup/devices.sh@196 -- # declare -a blocks 00:16:42.038 15:35:12 -- setup/devices.sh@197 -- # blocks_to_pci=() 00:16:42.038 15:35:12 -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:16:42.038 15:35:12 -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:16:42.038 15:35:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme0 00:16:42.038 15:35:12 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:16:42.038 15:35:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:16:42.038 15:35:12 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:16:42.038 15:35:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:16:42.038 No valid GPT data, bailing 00:16:42.038 15:35:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:16:42.038 15:35:12 -- scripts/common.sh@391 -- # pt= 00:16:42.038 15:35:12 -- scripts/common.sh@392 -- # return 1 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:16:42.038 15:35:12 -- setup/common.sh@76 -- # local dev=nvme0n1 00:16:42.038 15:35:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:16:42.038 15:35:12 -- setup/common.sh@80 -- # echo 4294967296 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:16:42.038 15:35:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:42.038 15:35:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:16:42.038 15:35:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme0n2 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme0 00:16:42.038 15:35:12 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:16:42.038 15:35:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # block_in_use nvme0n2 00:16:42.038 15:35:12 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:16:42.038 15:35:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:16:42.038 No valid GPT data, bailing 00:16:42.038 15:35:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:16:42.038 15:35:12 -- scripts/common.sh@391 -- # pt= 00:16:42.038 15:35:12 -- scripts/common.sh@392 -- # return 1 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n2 00:16:42.038 15:35:12 -- setup/common.sh@76 -- # local dev=nvme0n2 00:16:42.038 15:35:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n2 ]] 00:16:42.038 15:35:12 -- setup/common.sh@80 -- # echo 4294967296 00:16:42.038 15:35:12 -- 
setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:16:42.038 15:35:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:42.038 15:35:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:16:42.038 15:35:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme0n3 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme0 00:16:42.038 15:35:12 -- setup/devices.sh@202 -- # pci=0000:00:11.0 00:16:42.038 15:35:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\1\.\0* ]] 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # block_in_use nvme0n3 00:16:42.038 15:35:12 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:16:42.038 15:35:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:16:42.038 No valid GPT data, bailing 00:16:42.038 15:35:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:16:42.038 15:35:12 -- scripts/common.sh@391 -- # pt= 00:16:42.038 15:35:12 -- scripts/common.sh@392 -- # return 1 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n3 00:16:42.038 15:35:12 -- setup/common.sh@76 -- # local dev=nvme0n3 00:16:42.038 15:35:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n3 ]] 00:16:42.038 15:35:12 -- setup/common.sh@80 -- # echo 4294967296 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # (( 4294967296 >= min_disk_size )) 00:16:42.038 15:35:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:42.038 15:35:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:11.0 00:16:42.038 15:35:12 -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme1n1 00:16:42.038 15:35:12 -- setup/devices.sh@201 -- # ctrl=nvme1 00:16:42.038 15:35:12 -- setup/devices.sh@202 -- # pci=0000:00:10.0 00:16:42.038 15:35:12 -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\0\0\:\1\0\.\0* ]] 00:16:42.038 15:35:12 -- setup/devices.sh@204 -- # block_in_use nvme1n1 00:16:42.038 15:35:12 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:16:42.038 15:35:12 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:16:42.296 No valid GPT data, bailing 00:16:42.296 15:35:12 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:16:42.296 15:35:12 -- scripts/common.sh@391 -- # pt= 00:16:42.296 15:35:12 -- scripts/common.sh@392 -- # return 1 00:16:42.296 15:35:12 -- setup/devices.sh@204 -- # sec_size_to_bytes nvme1n1 00:16:42.296 15:35:12 -- setup/common.sh@76 -- # local dev=nvme1n1 00:16:42.296 15:35:12 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme1n1 ]] 00:16:42.296 15:35:12 -- setup/common.sh@80 -- # echo 5368709120 00:16:42.296 15:35:12 -- setup/devices.sh@204 -- # (( 5368709120 >= min_disk_size )) 00:16:42.296 15:35:12 -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:16:42.296 15:35:12 -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:00:10.0 00:16:42.296 15:35:12 -- setup/devices.sh@209 -- # (( 4 > 0 )) 00:16:42.296 15:35:12 -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:16:42.296 15:35:12 -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:16:42.296 15:35:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:42.296 15:35:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:42.296 15:35:12 -- common/autotest_common.sh@10 -- # set +x 00:16:42.296 
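Before the mount tests begin, devices.sh has just walked /sys/block/nvme* (skipping multipath c-devices), resolved each namespace back to a PCI address, required at least min_disk_size bytes, and used scripts/spdk-gpt.py plus blkid to confirm the disk carries no partition table ("No valid GPT data, bailing" means it is safe to use). A rough sketch of that filter follows, with simplified stand-ins where noted.
#!/usr/bin/env bash
# Sketch of the free-disk filter exercised above. The blkid check is a
# simplified stand-in for scripts/spdk-gpt.py, and the sysfs walk assumes
# the common non-multipath NVMe layout.
min_disk_size=$((3 * 1024 * 1024 * 1024))    # 3221225472 bytes, as in the trace
declare -a blocks
declare -A blocks_to_pci
for block in /sys/block/nvme*; do
    dev=${block##*/}
    [[ $dev == *c* ]] && continue            # skip nvmeXcYnZ multipath nodes
    # namespace -> controller -> PCI function (e.g. 0000:00:11.0)
    pci=$(basename "$(readlink -f "$block/device/device")")
    # A disk that already carries a partition table is considered in use.
    blkid -s PTTYPE -o value "/dev/$dev" >/dev/null 2>&1 && continue
    size=$(( $(cat "$block/size") * 512 ))   # the size file counts 512-byte sectors
    (( size >= min_disk_size )) || continue
    blocks+=("$dev")
    blocks_to_pci["$dev"]=$pci
done
echo "test disk: ${blocks[0]} (pci ${blocks_to_pci[${blocks[0]}]})"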
************************************ 00:16:42.296 START TEST nvme_mount 00:16:42.296 ************************************ 00:16:42.296 15:35:12 -- common/autotest_common.sh@1111 -- # nvme_mount 00:16:42.296 15:35:12 -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:16:42.296 15:35:12 -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:16:42.296 15:35:12 -- setup/devices.sh@97 -- # nvme_mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:42.296 15:35:12 -- setup/devices.sh@98 -- # nvme_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:42.296 15:35:12 -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:16:42.296 15:35:12 -- setup/common.sh@39 -- # local disk=nvme0n1 00:16:42.296 15:35:12 -- setup/common.sh@40 -- # local part_no=1 00:16:42.296 15:35:12 -- setup/common.sh@41 -- # local size=1073741824 00:16:42.296 15:35:12 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:16:42.296 15:35:12 -- setup/common.sh@44 -- # parts=() 00:16:42.296 15:35:12 -- setup/common.sh@44 -- # local parts 00:16:42.296 15:35:12 -- setup/common.sh@46 -- # (( part = 1 )) 00:16:42.296 15:35:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:42.296 15:35:12 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:42.296 15:35:12 -- setup/common.sh@46 -- # (( part++ )) 00:16:42.296 15:35:12 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:42.296 15:35:12 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:16:42.296 15:35:12 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:16:42.296 15:35:12 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:16:43.230 Creating new GPT entries in memory. 00:16:43.230 GPT data structures destroyed! You may now partition the disk using fdisk or 00:16:43.230 other utilities. 00:16:43.230 15:35:13 -- setup/common.sh@57 -- # (( part = 1 )) 00:16:43.230 15:35:13 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:43.230 15:35:13 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:43.230 15:35:13 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:43.230 15:35:13 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:16:44.605 Creating new GPT entries in memory. 00:16:44.605 The operation has completed successfully. 
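The nvme_mount preparation above zaps the GPT on nvme0n1 and carves a single partition (sectors 2048..264191) with sgdisk while sync_dev_uevents.sh waits for the matching uevents. A compact sketch of that loop follows, using udevadm settle as a stand-in for the uevent-sync helper.
#!/usr/bin/env bash
# Sketch of the partition loop traced above; the sgdisk flags match the log,
# and 'udevadm settle' stands in for scripts/sync_dev_uevents.sh.
partition_drive() {
    local disk=$1 part_no=${2:-1} size=${3:-1073741824}
    local part part_start=0 part_end=0
    (( size /= 4096 ))                 # sectors per partition, as in the trace
    sgdisk "/dev/$disk" --zap-all      # wipe any existing GPT/MBR first
    for (( part = 1; part <= part_no; part++ )); do
        (( part_start = part_start == 0 ? 2048 : part_end + 1 ))
        (( part_end = part_start + size - 1 ))
        flock "/dev/$disk" sgdisk "/dev/$disk" --new="$part:$part_start:$part_end"
    done
    udevadm settle                     # wait for the new partition nodes to appear
}
# Mirroring the trace: one partition on nvme0n1 yields --new=1:2048:264191.
# partition_drive nvme0n1 1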
00:16:44.605 15:35:14 -- setup/common.sh@57 -- # (( part++ )) 00:16:44.605 15:35:14 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:44.605 15:35:14 -- setup/common.sh@62 -- # wait 58273 00:16:44.605 15:35:14 -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.605 15:35:14 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size= 00:16:44.605 15:35:14 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.605 15:35:14 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:16:44.605 15:35:14 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:16:44.605 15:35:14 -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.605 15:35:14 -- setup/devices.sh@105 -- # verify 0000:00:11.0 nvme0n1:nvme0n1p1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:44.605 15:35:14 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:16:44.605 15:35:14 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:16:44.605 15:35:14 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.605 15:35:14 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:44.605 15:35:14 -- setup/devices.sh@53 -- # local found=0 00:16:44.605 15:35:14 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:44.605 15:35:14 -- setup/devices.sh@56 -- # : 00:16:44.605 15:35:14 -- setup/devices.sh@59 -- # local pci status 00:16:44.605 15:35:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.605 15:35:14 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:16:44.605 15:35:14 -- setup/devices.sh@47 -- # setup output config 00:16:44.605 15:35:14 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:44.605 15:35:14 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:44.605 15:35:14 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:44.605 15:35:14 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:16:44.605 15:35:14 -- setup/devices.sh@63 -- # found=1 00:16:44.605 15:35:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.605 15:35:14 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:44.605 15:35:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.605 15:35:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:44.605 15:35:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.864 15:35:14 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:44.864 15:35:14 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:44.864 15:35:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:44.864 15:35:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:16:44.864 15:35:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.864 15:35:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:44.864 15:35:15 -- setup/devices.sh@74 -- # rm 
/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:44.864 15:35:15 -- setup/devices.sh@110 -- # cleanup_nvme 00:16:44.864 15:35:15 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.864 15:35:15 -- setup/devices.sh@21 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:44.864 15:35:15 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:44.864 15:35:15 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:44.864 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:44.864 15:35:15 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:44.864 15:35:15 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:45.123 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:16:45.123 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:16:45.123 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:45.123 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:45.123 15:35:15 -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 1024M 00:16:45.123 15:35:15 -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount size=1024M 00:16:45.123 15:35:15 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:45.123 15:35:15 -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:16:45.123 15:35:15 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:16:45.123 15:35:15 -- setup/common.sh@72 -- # mount /dev/nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:45.123 15:35:15 -- setup/devices.sh@116 -- # verify 0000:00:11.0 nvme0n1:nvme0n1 /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:45.123 15:35:15 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:16:45.123 15:35:15 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:16:45.123 15:35:15 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:45.123 15:35:15 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:45.382 15:35:15 -- setup/devices.sh@53 -- # local found=0 00:16:45.382 15:35:15 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:45.382 15:35:15 -- setup/devices.sh@56 -- # : 00:16:45.382 15:35:15 -- setup/devices.sh@59 -- # local pci status 00:16:45.382 15:35:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.382 15:35:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:16:45.382 15:35:15 -- setup/devices.sh@47 -- # setup output config 00:16:45.382 15:35:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:45.382 15:35:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:45.382 15:35:15 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:45.382 15:35:15 -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:16:45.382 15:35:15 -- setup/devices.sh@63 -- # found=1 00:16:45.382 15:35:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.382 15:35:15 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:45.382 
15:35:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.673 15:35:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:45.673 15:35:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.673 15:35:15 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:45.673 15:35:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.673 15:35:15 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:45.673 15:35:15 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount ]] 00:16:45.673 15:35:15 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:45.673 15:35:15 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme ]] 00:16:45.673 15:35:15 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount/test_nvme 00:16:45.673 15:35:15 -- setup/devices.sh@123 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:45.673 15:35:15 -- setup/devices.sh@125 -- # verify 0000:00:11.0 data@nvme0n1 '' '' 00:16:45.673 15:35:15 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:16:45.673 15:35:15 -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:16:45.673 15:35:15 -- setup/devices.sh@50 -- # local mount_point= 00:16:45.673 15:35:15 -- setup/devices.sh@51 -- # local test_file= 00:16:45.673 15:35:15 -- setup/devices.sh@53 -- # local found=0 00:16:45.673 15:35:15 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:45.673 15:35:15 -- setup/devices.sh@59 -- # local pci status 00:16:45.673 15:35:15 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.673 15:35:15 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:16:45.673 15:35:15 -- setup/devices.sh@47 -- # setup output config 00:16:45.673 15:35:15 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:45.673 15:35:15 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:45.930 15:35:16 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:45.930 15:35:16 -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:16:45.930 15:35:16 -- setup/devices.sh@63 -- # found=1 00:16:45.930 15:35:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:45.930 15:35:16 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:45.930 15:35:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:46.188 15:35:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:46.188 15:35:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:46.188 15:35:16 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:46.188 15:35:16 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:46.188 15:35:16 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:46.188 15:35:16 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:46.446 15:35:16 -- setup/devices.sh@68 -- # return 0 00:16:46.446 15:35:16 -- setup/devices.sh@128 -- # cleanup_nvme 00:16:46.446 15:35:16 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:46.446 15:35:16 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:46.446 15:35:16 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:46.446 15:35:16 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:46.446 /dev/nvme0n1: 2 bytes were erased at offset 
0x00000438 (ext4): 53 ef 00:16:46.446 00:16:46.446 real 0m4.060s 00:16:46.446 user 0m0.729s 00:16:46.446 sys 0m1.004s 00:16:46.446 15:35:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:46.446 15:35:16 -- common/autotest_common.sh@10 -- # set +x 00:16:46.446 ************************************ 00:16:46.446 END TEST nvme_mount 00:16:46.446 ************************************ 00:16:46.446 15:35:16 -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:16:46.446 15:35:16 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:46.446 15:35:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:46.446 15:35:16 -- common/autotest_common.sh@10 -- # set +x 00:16:46.446 ************************************ 00:16:46.446 START TEST dm_mount 00:16:46.446 ************************************ 00:16:46.446 15:35:16 -- common/autotest_common.sh@1111 -- # dm_mount 00:16:46.446 15:35:16 -- setup/devices.sh@144 -- # pv=nvme0n1 00:16:46.446 15:35:16 -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:16:46.446 15:35:16 -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:16:46.446 15:35:16 -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:16:46.446 15:35:16 -- setup/common.sh@39 -- # local disk=nvme0n1 00:16:46.446 15:35:16 -- setup/common.sh@40 -- # local part_no=2 00:16:46.446 15:35:16 -- setup/common.sh@41 -- # local size=1073741824 00:16:46.446 15:35:16 -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:16:46.446 15:35:16 -- setup/common.sh@44 -- # parts=() 00:16:46.446 15:35:16 -- setup/common.sh@44 -- # local parts 00:16:46.446 15:35:16 -- setup/common.sh@46 -- # (( part = 1 )) 00:16:46.446 15:35:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:46.446 15:35:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:46.446 15:35:16 -- setup/common.sh@46 -- # (( part++ )) 00:16:46.446 15:35:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:46.446 15:35:16 -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:16:46.446 15:35:16 -- setup/common.sh@46 -- # (( part++ )) 00:16:46.446 15:35:16 -- setup/common.sh@46 -- # (( part <= part_no )) 00:16:46.446 15:35:16 -- setup/common.sh@51 -- # (( size /= 4096 )) 00:16:46.446 15:35:16 -- setup/common.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:16:46.446 15:35:16 -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:16:47.381 Creating new GPT entries in memory. 00:16:47.381 GPT data structures destroyed! You may now partition the disk using fdisk or 00:16:47.381 other utilities. 00:16:47.381 15:35:17 -- setup/common.sh@57 -- # (( part = 1 )) 00:16:47.381 15:35:17 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:47.381 15:35:17 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:16:47.381 15:35:17 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:47.381 15:35:17 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:264191 00:16:48.757 Creating new GPT entries in memory. 00:16:48.757 The operation has completed successfully. 00:16:48.757 15:35:18 -- setup/common.sh@57 -- # (( part++ )) 00:16:48.757 15:35:18 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:48.757 15:35:18 -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:16:48.757 15:35:18 -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:16:48.757 15:35:18 -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:264192:526335 00:16:49.691 The operation has completed successfully. 00:16:49.691 15:35:19 -- setup/common.sh@57 -- # (( part++ )) 00:16:49.691 15:35:19 -- setup/common.sh@57 -- # (( part <= part_no )) 00:16:49.691 15:35:19 -- setup/common.sh@62 -- # wait 58710 00:16:49.691 15:35:19 -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:16:49.691 15:35:19 -- setup/devices.sh@151 -- # dm_mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:49.691 15:35:19 -- setup/devices.sh@152 -- # dm_dummy_test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:49.691 15:35:19 -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:16:49.691 15:35:19 -- setup/devices.sh@160 -- # for t in {1..5} 00:16:49.691 15:35:19 -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:49.691 15:35:19 -- setup/devices.sh@161 -- # break 00:16:49.691 15:35:19 -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:49.691 15:35:19 -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:16:49.691 15:35:19 -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:16:49.691 15:35:19 -- setup/devices.sh@166 -- # dm=dm-0 00:16:49.691 15:35:19 -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:16:49.691 15:35:19 -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:16:49.691 15:35:19 -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:49.691 15:35:19 -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount size= 00:16:49.691 15:35:19 -- setup/common.sh@68 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:49.691 15:35:19 -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:16:49.691 15:35:19 -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:16:49.691 15:35:19 -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:49.691 15:35:19 -- setup/devices.sh@174 -- # verify 0000:00:11.0 nvme0n1:nvme_dm_test /home/vagrant/spdk_repo/spdk/test/setup/dm_mount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:49.691 15:35:19 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:16:49.691 15:35:19 -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:16:49.691 15:35:19 -- setup/devices.sh@50 -- # local mount_point=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:49.691 15:35:19 -- setup/devices.sh@51 -- # local test_file=/home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:49.691 15:35:19 -- setup/devices.sh@53 -- # local found=0 00:16:49.691 15:35:19 -- setup/devices.sh@55 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:16:49.691 15:35:19 -- setup/devices.sh@56 -- # : 00:16:49.691 15:35:19 -- setup/devices.sh@59 -- # local pci status 00:16:49.691 15:35:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.691 15:35:19 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:16:49.691 15:35:19 -- setup/devices.sh@47 -- # setup output config 00:16:49.691 15:35:19 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:49.691 15:35:19 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:49.949 15:35:19 -- 
setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:49.949 15:35:19 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:16:49.949 15:35:19 -- setup/devices.sh@63 -- # found=1 00:16:49.949 15:35:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.949 15:35:19 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:49.949 15:35:19 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.949 15:35:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:49.949 15:35:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:49.949 15:35:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:49.949 15:35:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.207 15:35:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:50.207 15:35:20 -- setup/devices.sh@68 -- # [[ -n /home/vagrant/spdk_repo/spdk/test/setup/dm_mount ]] 00:16:50.207 15:35:20 -- setup/devices.sh@71 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:50.207 15:35:20 -- setup/devices.sh@73 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm ]] 00:16:50.207 15:35:20 -- setup/devices.sh@74 -- # rm /home/vagrant/spdk_repo/spdk/test/setup/dm_mount/test_dm 00:16:50.207 15:35:20 -- setup/devices.sh@182 -- # umount /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:50.207 15:35:20 -- setup/devices.sh@184 -- # verify 0000:00:11.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:16:50.207 15:35:20 -- setup/devices.sh@48 -- # local dev=0000:00:11.0 00:16:50.207 15:35:20 -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:16:50.207 15:35:20 -- setup/devices.sh@50 -- # local mount_point= 00:16:50.207 15:35:20 -- setup/devices.sh@51 -- # local test_file= 00:16:50.207 15:35:20 -- setup/devices.sh@53 -- # local found=0 00:16:50.207 15:35:20 -- setup/devices.sh@55 -- # [[ -n '' ]] 00:16:50.207 15:35:20 -- setup/devices.sh@59 -- # local pci status 00:16:50.207 15:35:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.207 15:35:20 -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:00:11.0 00:16:50.207 15:35:20 -- setup/devices.sh@47 -- # setup output config 00:16:50.207 15:35:20 -- setup/common.sh@9 -- # [[ output == output ]] 00:16:50.207 15:35:20 -- setup/common.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh config 00:16:50.207 15:35:20 -- setup/devices.sh@62 -- # [[ 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:50.207 15:35:20 -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:16:50.207 15:35:20 -- setup/devices.sh@63 -- # found=1 00:16:50.207 15:35:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.207 15:35:20 -- setup/devices.sh@62 -- # [[ 0000:00:10.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:50.207 15:35:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.466 15:35:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:50.466 15:35:20 -- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.466 15:35:20 -- setup/devices.sh@62 -- # [[ 0000:00:03.0 == \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:16:50.466 15:35:20 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:16:50.724 15:35:20 -- setup/devices.sh@66 -- # (( found == 1 )) 00:16:50.724 15:35:20 -- setup/devices.sh@68 -- # [[ -n '' ]] 00:16:50.724 15:35:20 -- setup/devices.sh@68 -- # return 0 00:16:50.724 15:35:20 -- setup/devices.sh@187 -- # cleanup_dm 00:16:50.724 15:35:20 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:50.724 15:35:20 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:50.724 15:35:20 -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:16:50.724 15:35:20 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:50.724 15:35:20 -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:16:50.724 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:16:50.724 15:35:20 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:50.724 15:35:20 -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:16:50.724 00:16:50.724 real 0m4.249s 00:16:50.724 user 0m0.440s 00:16:50.724 sys 0m0.705s 00:16:50.724 15:35:20 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.724 15:35:20 -- common/autotest_common.sh@10 -- # set +x 00:16:50.724 ************************************ 00:16:50.724 END TEST dm_mount 00:16:50.724 ************************************ 00:16:50.724 15:35:20 -- setup/devices.sh@1 -- # cleanup 00:16:50.724 15:35:20 -- setup/devices.sh@11 -- # cleanup_nvme 00:16:50.724 15:35:20 -- setup/devices.sh@20 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/nvme_mount 00:16:50.724 15:35:20 -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:50.724 15:35:20 -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:16:50.724 15:35:20 -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:16:50.724 15:35:20 -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:16:50.982 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:16:50.982 /dev/nvme0n1: 8 bytes were erased at offset 0xfffff000 (gpt): 45 46 49 20 50 41 52 54 00:16:50.982 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:16:50.982 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:16:50.982 15:35:21 -- setup/devices.sh@12 -- # cleanup_dm 00:16:50.982 15:35:21 -- setup/devices.sh@33 -- # mountpoint -q /home/vagrant/spdk_repo/spdk/test/setup/dm_mount 00:16:50.982 15:35:21 -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:16:50.982 15:35:21 -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:16:50.982 15:35:21 -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:16:50.982 15:35:21 -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:16:50.982 15:35:21 -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:16:50.982 00:16:50.982 real 0m9.995s 00:16:50.982 user 0m1.830s 00:16:50.982 sys 0m2.410s 00:16:50.982 15:35:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.982 15:35:21 -- common/autotest_common.sh@10 -- # set +x 00:16:50.982 ************************************ 00:16:50.982 END TEST devices 00:16:50.982 ************************************ 00:16:50.982 00:16:50.982 real 0m23.050s 00:16:50.982 user 0m7.453s 00:16:50.982 sys 0m9.653s 00:16:50.982 15:35:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:50.982 15:35:21 -- common/autotest_common.sh@10 -- # set +x 00:16:50.982 ************************************ 00:16:50.982 END TEST setup.sh 00:16:50.982 ************************************ 00:16:51.241 15:35:21 -- 
spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:16:51.807 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:51.807 Hugepages 00:16:51.807 node hugesize free / total 00:16:51.807 node0 1048576kB 0 / 0 00:16:51.807 node0 2048kB 2048 / 2048 00:16:51.807 00:16:51.807 Type BDF Vendor Device NUMA Driver Device Block devices 00:16:51.807 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:16:51.807 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme1 nvme1n1 00:16:52.066 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme0 nvme0n1 nvme0n2 nvme0n3 00:16:52.066 15:35:22 -- spdk/autotest.sh@130 -- # uname -s 00:16:52.066 15:35:22 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:16:52.066 15:35:22 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:16:52.066 15:35:22 -- common/autotest_common.sh@1517 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:52.788 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:52.788 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:52.788 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:52.788 15:35:23 -- common/autotest_common.sh@1518 -- # sleep 1 00:16:54.164 15:35:24 -- common/autotest_common.sh@1519 -- # bdfs=() 00:16:54.164 15:35:24 -- common/autotest_common.sh@1519 -- # local bdfs 00:16:54.164 15:35:24 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:16:54.164 15:35:24 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:16:54.164 15:35:24 -- common/autotest_common.sh@1499 -- # bdfs=() 00:16:54.164 15:35:24 -- common/autotest_common.sh@1499 -- # local bdfs 00:16:54.164 15:35:24 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:54.164 15:35:24 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:54.164 15:35:24 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:16:54.164 15:35:24 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:16:54.164 15:35:24 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:54.164 15:35:24 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:16:54.164 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:16:54.164 Waiting for block devices as requested 00:16:54.496 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:16:54.496 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:16:54.496 15:35:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:16:54.496 15:35:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:16:54.496 15:35:24 -- common/autotest_common.sh@1488 -- # grep 0000:00:10.0/nvme/nvme 00:16:54.497 15:35:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1525 -- # 
nvme_ctrlr=/dev/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:16:54.497 15:35:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:16:54.497 15:35:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:16:54.497 15:35:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1543 -- # continue 00:16:54.497 15:35:24 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:16:54.497 15:35:24 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1488 -- # grep 0000:00:11.0/nvme/nvme 00:16:54.497 15:35:24 -- common/autotest_common.sh@1488 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:16:54.497 15:35:24 -- common/autotest_common.sh@1488 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1489 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1493 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1493 -- # printf '%s\n' nvme0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # grep oacs 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:16:54.497 15:35:24 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:16:54.497 15:35:24 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:16:54.497 15:35:24 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:16:54.497 15:35:24 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:16:54.497 15:35:24 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:16:54.497 15:35:24 -- common/autotest_common.sh@1543 -- # continue 00:16:54.828 15:35:24 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:16:54.828 15:35:24 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:54.828 15:35:24 -- common/autotest_common.sh@10 -- # set +x 00:16:54.828 15:35:24 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:16:54.828 15:35:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:54.828 15:35:24 -- common/autotest_common.sh@10 -- # set +x 00:16:54.828 15:35:24 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:16:55.394 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not 
binding PCI dev 00:16:55.394 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:16:55.394 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:16:55.394 15:35:25 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:16:55.394 15:35:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:16:55.394 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:55.652 15:35:25 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:16:55.652 15:35:25 -- common/autotest_common.sh@1577 -- # mapfile -t bdfs 00:16:55.652 15:35:25 -- common/autotest_common.sh@1577 -- # get_nvme_bdfs_by_id 0x0a54 00:16:55.652 15:35:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:16:55.652 15:35:25 -- common/autotest_common.sh@1563 -- # local bdfs 00:16:55.652 15:35:25 -- common/autotest_common.sh@1565 -- # get_nvme_bdfs 00:16:55.652 15:35:25 -- common/autotest_common.sh@1499 -- # bdfs=() 00:16:55.652 15:35:25 -- common/autotest_common.sh@1499 -- # local bdfs 00:16:55.652 15:35:25 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:16:55.652 15:35:25 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:16:55.652 15:35:25 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:16:55.652 15:35:25 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:16:55.652 15:35:25 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:16:55.652 15:35:25 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:16:55.652 15:35:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:16:55.652 15:35:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:16:55.652 15:35:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:55.652 15:35:25 -- common/autotest_common.sh@1565 -- # for bdf in $(get_nvme_bdfs) 00:16:55.652 15:35:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:16:55.652 15:35:25 -- common/autotest_common.sh@1566 -- # device=0x0010 00:16:55.652 15:35:25 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:16:55.652 15:35:25 -- common/autotest_common.sh@1572 -- # printf '%s\n' 00:16:55.652 15:35:25 -- common/autotest_common.sh@1578 -- # [[ -z '' ]] 00:16:55.652 15:35:25 -- common/autotest_common.sh@1579 -- # return 0 00:16:55.652 15:35:25 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:16:55.652 15:35:25 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:16:55.652 15:35:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:16:55.652 15:35:25 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:16:55.652 15:35:25 -- spdk/autotest.sh@162 -- # timing_enter lib 00:16:55.652 15:35:25 -- common/autotest_common.sh@710 -- # xtrace_disable 00:16:55.652 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:55.652 15:35:25 -- spdk/autotest.sh@164 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:55.652 15:35:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:55.652 15:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.652 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:55.652 ************************************ 00:16:55.652 START TEST env 00:16:55.652 ************************************ 00:16:55.652 15:35:25 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:16:55.652 * Looking for test storage... 
00:16:55.652 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:16:55.652 15:35:25 -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:55.652 15:35:25 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:55.652 15:35:25 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:55.652 15:35:25 -- common/autotest_common.sh@10 -- # set +x 00:16:55.909 ************************************ 00:16:55.909 START TEST env_memory 00:16:55.909 ************************************ 00:16:55.909 15:35:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:16:55.909 00:16:55.909 00:16:55.909 CUnit - A unit testing framework for C - Version 2.1-3 00:16:55.909 http://cunit.sourceforge.net/ 00:16:55.909 00:16:55.909 00:16:55.909 Suite: memory 00:16:55.909 Test: alloc and free memory map ...[2024-04-26 15:35:26.059056] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:16:55.909 passed 00:16:55.909 Test: mem map translation ...[2024-04-26 15:35:26.090008] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:16:55.909 [2024-04-26 15:35:26.090050] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:16:55.909 [2024-04-26 15:35:26.090107] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:16:55.909 [2024-04-26 15:35:26.090118] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:16:55.909 passed 00:16:55.909 Test: mem map registration ...[2024-04-26 15:35:26.153870] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:16:55.909 [2024-04-26 15:35:26.153911] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:16:55.909 passed 00:16:56.168 Test: mem map adjacent registrations ...passed 00:16:56.168 00:16:56.168 Run Summary: Type Total Ran Passed Failed Inactive 00:16:56.168 suites 1 1 n/a 0 0 00:16:56.168 tests 4 4 4 0 0 00:16:56.168 asserts 152 152 152 0 n/a 00:16:56.168 00:16:56.168 Elapsed time = 0.214 seconds 00:16:56.168 00:16:56.168 real 0m0.227s 00:16:56.168 user 0m0.214s 00:16:56.168 sys 0m0.012s 00:16:56.168 15:35:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:56.168 ************************************ 00:16:56.168 END TEST env_memory 00:16:56.168 15:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:56.168 ************************************ 00:16:56.168 15:35:26 -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:56.168 15:35:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:56.168 15:35:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:56.168 15:35:26 -- common/autotest_common.sh@10 -- # set +x 00:16:56.168 ************************************ 00:16:56.168 START TEST env_vtophys 00:16:56.168 ************************************ 00:16:56.168 15:35:26 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:16:56.168 EAL: lib.eal log level changed from notice to debug 00:16:56.168 EAL: Detected lcore 0 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 1 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 2 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 3 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 4 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 5 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 6 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 7 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 8 as core 0 on socket 0 00:16:56.168 EAL: Detected lcore 9 as core 0 on socket 0 00:16:56.168 EAL: Maximum logical cores by configuration: 128 00:16:56.168 EAL: Detected CPU lcores: 10 00:16:56.168 EAL: Detected NUMA nodes: 1 00:16:56.168 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:16:56.168 EAL: Detected shared linkage of DPDK 00:16:56.168 EAL: No shared files mode enabled, IPC will be disabled 00:16:56.168 EAL: Selected IOVA mode 'PA' 00:16:56.168 EAL: Probing VFIO support... 00:16:56.168 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:56.168 EAL: VFIO modules not loaded, skipping VFIO support... 00:16:56.168 EAL: Ask a virtual area of 0x2e000 bytes 00:16:56.168 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:16:56.168 EAL: Setting up physically contiguous memory... 00:16:56.168 EAL: Setting maximum number of open files to 524288 00:16:56.168 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:16:56.168 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:16:56.168 EAL: Ask a virtual area of 0x61000 bytes 00:16:56.168 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:16:56.168 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:56.168 EAL: Ask a virtual area of 0x400000000 bytes 00:16:56.168 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:16:56.168 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:16:56.168 EAL: Ask a virtual area of 0x61000 bytes 00:16:56.168 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:16:56.168 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:56.168 EAL: Ask a virtual area of 0x400000000 bytes 00:16:56.168 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:16:56.168 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:16:56.168 EAL: Ask a virtual area of 0x61000 bytes 00:16:56.168 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:16:56.168 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:56.168 EAL: Ask a virtual area of 0x400000000 bytes 00:16:56.168 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:16:56.168 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:16:56.168 EAL: Ask a virtual area of 0x61000 bytes 00:16:56.168 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:16:56.168 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:16:56.168 EAL: Ask a virtual area of 0x400000000 bytes 00:16:56.168 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:16:56.168 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:16:56.168 EAL: Hugepages will be freed exactly as allocated. 
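The EAL preamble above sizes its memseg lists from the 2048kB hugepages reserved for this run (node0 2048kB 2048 / 2048 in the setup.sh status table earlier). A small sketch of how to confirm what is actually free per NUMA node before launching a test like vtophys; this is plain Linux sysfs, not an SPDK interface, and assumes /sys is mounted in the usual place:

    # sketch: per-node hugepage availability from sysfs
    for d in /sys/devices/system/node/node*/hugepages/hugepages-*kB; do
        node=${d#/sys/devices/system/node/}; node=${node%%/*}   # e.g. node0
        size=${d##*hugepages-}                                  # e.g. 2048kB
        echo "$node $size free=$(cat "$d/free_hugepages") total=$(cat "$d/nr_hugepages")"
    done

If free hugepages drop to zero, the heap expansions logged below start failing, so this is worth checking when a vtophys or malloc test aborts early.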
00:16:56.168 EAL: No shared files mode enabled, IPC is disabled 00:16:56.168 EAL: No shared files mode enabled, IPC is disabled 00:16:56.426 EAL: TSC frequency is ~2200000 KHz 00:16:56.426 EAL: Main lcore 0 is ready (tid=7f52b2c1fa00;cpuset=[0]) 00:16:56.426 EAL: Trying to obtain current memory policy. 00:16:56.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.426 EAL: Restoring previous memory policy: 0 00:16:56.426 EAL: request: mp_malloc_sync 00:16:56.426 EAL: No shared files mode enabled, IPC is disabled 00:16:56.426 EAL: Heap on socket 0 was expanded by 2MB 00:16:56.426 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:16:56.426 EAL: No PCI address specified using 'addr=' in: bus=pci 00:16:56.426 EAL: Mem event callback 'spdk:(nil)' registered 00:16:56.426 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:16:56.426 00:16:56.426 00:16:56.426 CUnit - A unit testing framework for C - Version 2.1-3 00:16:56.426 http://cunit.sourceforge.net/ 00:16:56.426 00:16:56.426 00:16:56.426 Suite: components_suite 00:16:56.426 Test: vtophys_malloc_test ...passed 00:16:56.426 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:16:56.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.426 EAL: Restoring previous memory policy: 4 00:16:56.426 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.426 EAL: request: mp_malloc_sync 00:16:56.426 EAL: No shared files mode enabled, IPC is disabled 00:16:56.426 EAL: Heap on socket 0 was expanded by 4MB 00:16:56.426 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.426 EAL: request: mp_malloc_sync 00:16:56.426 EAL: No shared files mode enabled, IPC is disabled 00:16:56.426 EAL: Heap on socket 0 was shrunk by 4MB 00:16:56.426 EAL: Trying to obtain current memory policy. 00:16:56.426 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.426 EAL: Restoring previous memory policy: 4 00:16:56.426 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.426 EAL: request: mp_malloc_sync 00:16:56.426 EAL: No shared files mode enabled, IPC is disabled 00:16:56.426 EAL: Heap on socket 0 was expanded by 6MB 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was shrunk by 6MB 00:16:56.427 EAL: Trying to obtain current memory policy. 00:16:56.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.427 EAL: Restoring previous memory policy: 4 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was expanded by 10MB 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was shrunk by 10MB 00:16:56.427 EAL: Trying to obtain current memory policy. 
00:16:56.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.427 EAL: Restoring previous memory policy: 4 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was expanded by 18MB 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was shrunk by 18MB 00:16:56.427 EAL: Trying to obtain current memory policy. 00:16:56.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.427 EAL: Restoring previous memory policy: 4 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was expanded by 34MB 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was shrunk by 34MB 00:16:56.427 EAL: Trying to obtain current memory policy. 00:16:56.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.427 EAL: Restoring previous memory policy: 4 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was expanded by 66MB 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was shrunk by 66MB 00:16:56.427 EAL: Trying to obtain current memory policy. 00:16:56.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.427 EAL: Restoring previous memory policy: 4 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was expanded by 130MB 00:16:56.427 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.427 EAL: request: mp_malloc_sync 00:16:56.427 EAL: No shared files mode enabled, IPC is disabled 00:16:56.427 EAL: Heap on socket 0 was shrunk by 130MB 00:16:56.427 EAL: Trying to obtain current memory policy. 00:16:56.427 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.685 EAL: Restoring previous memory policy: 4 00:16:56.685 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.685 EAL: request: mp_malloc_sync 00:16:56.685 EAL: No shared files mode enabled, IPC is disabled 00:16:56.685 EAL: Heap on socket 0 was expanded by 258MB 00:16:56.685 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.685 EAL: request: mp_malloc_sync 00:16:56.685 EAL: No shared files mode enabled, IPC is disabled 00:16:56.685 EAL: Heap on socket 0 was shrunk by 258MB 00:16:56.685 EAL: Trying to obtain current memory policy. 
00:16:56.685 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:56.942 EAL: Restoring previous memory policy: 4 00:16:56.942 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.942 EAL: request: mp_malloc_sync 00:16:56.942 EAL: No shared files mode enabled, IPC is disabled 00:16:56.942 EAL: Heap on socket 0 was expanded by 514MB 00:16:56.942 EAL: Calling mem event callback 'spdk:(nil)' 00:16:56.942 EAL: request: mp_malloc_sync 00:16:56.942 EAL: No shared files mode enabled, IPC is disabled 00:16:56.942 EAL: Heap on socket 0 was shrunk by 514MB 00:16:56.942 EAL: Trying to obtain current memory policy. 00:16:56.942 EAL: Setting policy MPOL_PREFERRED for socket 0 00:16:57.509 EAL: Restoring previous memory policy: 4 00:16:57.509 EAL: Calling mem event callback 'spdk:(nil)' 00:16:57.509 EAL: request: mp_malloc_sync 00:16:57.509 EAL: No shared files mode enabled, IPC is disabled 00:16:57.509 EAL: Heap on socket 0 was expanded by 1026MB 00:16:57.509 EAL: Calling mem event callback 'spdk:(nil)' 00:16:57.766 EAL: request: mp_malloc_sync 00:16:57.766 EAL: No shared files mode enabled, IPC is disabled 00:16:57.766 EAL: Heap on socket 0 was shrunk by 1026MB 00:16:57.766 passed 00:16:57.766 00:16:57.766 Run Summary: Type Total Ran Passed Failed Inactive 00:16:57.766 suites 1 1 n/a 0 0 00:16:57.766 tests 2 2 2 0 0 00:16:57.766 asserts 5197 5197 5197 0 n/a 00:16:57.766 00:16:57.766 Elapsed time = 1.388 seconds 00:16:57.766 EAL: Calling mem event callback 'spdk:(nil)' 00:16:57.766 EAL: request: mp_malloc_sync 00:16:57.766 EAL: No shared files mode enabled, IPC is disabled 00:16:57.766 EAL: Heap on socket 0 was shrunk by 2MB 00:16:57.766 EAL: No shared files mode enabled, IPC is disabled 00:16:57.766 EAL: No shared files mode enabled, IPC is disabled 00:16:57.766 EAL: No shared files mode enabled, IPC is disabled 00:16:57.766 00:16:57.766 real 0m1.594s 00:16:57.766 user 0m0.886s 00:16:57.766 sys 0m0.570s 00:16:57.766 15:35:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:57.766 15:35:27 -- common/autotest_common.sh@10 -- # set +x 00:16:57.766 ************************************ 00:16:57.766 END TEST env_vtophys 00:16:57.766 ************************************ 00:16:57.766 15:35:27 -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:16:57.766 15:35:27 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:57.766 15:35:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:57.766 15:35:27 -- common/autotest_common.sh@10 -- # set +x 00:16:58.024 ************************************ 00:16:58.024 START TEST env_pci 00:16:58.024 ************************************ 00:16:58.024 15:35:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:16:58.024 00:16:58.024 00:16:58.024 CUnit - A unit testing framework for C - Version 2.1-3 00:16:58.024 http://cunit.sourceforge.net/ 00:16:58.024 00:16:58.024 00:16:58.024 Suite: pci 00:16:58.024 Test: pci_hook ...[2024-04-26 15:35:28.087787] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 59929 has claimed it 00:16:58.024 passed 00:16:58.024 00:16:58.025 Run Summary: Type Total Ran Passed Failed Inactive 00:16:58.025 suites 1 1 n/a 0 0 00:16:58.025 tests 1 1 1 0 0 00:16:58.025 asserts 25 25 25 0 n/a 00:16:58.025 00:16:58.025 Elapsed time = 0.002 seconds 00:16:58.025 EAL: Cannot find device (10000:00:01.0) 00:16:58.025 EAL: Failed to attach device 
on primary process 00:16:58.025 00:16:58.025 real 0m0.016s 00:16:58.025 user 0m0.009s 00:16:58.025 sys 0m0.006s 00:16:58.025 15:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:58.025 ************************************ 00:16:58.025 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.025 END TEST env_pci 00:16:58.025 ************************************ 00:16:58.025 15:35:28 -- env/env.sh@14 -- # argv='-c 0x1 ' 00:16:58.025 15:35:28 -- env/env.sh@15 -- # uname 00:16:58.025 15:35:28 -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:16:58.025 15:35:28 -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:16:58.025 15:35:28 -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:16:58.025 15:35:28 -- common/autotest_common.sh@1087 -- # '[' 5 -le 1 ']' 00:16:58.025 15:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.025 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.025 ************************************ 00:16:58.025 START TEST env_dpdk_post_init 00:16:58.025 ************************************ 00:16:58.025 15:35:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:16:58.025 EAL: Detected CPU lcores: 10 00:16:58.025 EAL: Detected NUMA nodes: 1 00:16:58.025 EAL: Detected shared linkage of DPDK 00:16:58.025 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:16:58.025 EAL: Selected IOVA mode 'PA' 00:16:58.283 TELEMETRY: No legacy callbacks, legacy socket not created 00:16:58.283 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:16:58.283 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:16:58.283 Starting DPDK initialization... 00:16:58.283 Starting SPDK post initialization... 00:16:58.283 SPDK NVMe probe 00:16:58.283 Attaching to 0000:00:10.0 00:16:58.283 Attaching to 0000:00:11.0 00:16:58.283 Attached to 0000:00:10.0 00:16:58.283 Attached to 0000:00:11.0 00:16:58.283 Cleaning up... 
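The probe above only finds 0000:00:10.0 and 0000:00:11.0 because scripts/setup.sh rebound them from the kernel nvme driver to uio_pci_generic earlier in this stage. A minimal sketch of that bind/unbind cycle, assuming the repo layout used in this run; HUGEMEM is an optional setup.sh knob and is shown only for illustration:

    # sketch: hand the NVMe controllers to SPDK, inspect, then give them back to the kernel
    sudo HUGEMEM=2048 ./scripts/setup.sh config   # nvme -> uio_pci_generic (or vfio-pci), reserve hugepages
    ./scripts/setup.sh status                     # BDF / driver / block device table, as printed earlier
    sudo ./scripts/setup.sh reset                 # uio_pci_generic -> nvme, kernel block devices reappear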
00:16:58.283 00:16:58.283 real 0m0.171s 00:16:58.283 user 0m0.041s 00:16:58.283 sys 0m0.031s 00:16:58.283 15:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:58.283 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.283 ************************************ 00:16:58.283 END TEST env_dpdk_post_init 00:16:58.283 ************************************ 00:16:58.283 15:35:28 -- env/env.sh@26 -- # uname 00:16:58.283 15:35:28 -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:16:58.283 15:35:28 -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:16:58.283 15:35:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:58.283 15:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.283 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.283 ************************************ 00:16:58.283 START TEST env_mem_callbacks 00:16:58.283 ************************************ 00:16:58.283 15:35:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:16:58.283 EAL: Detected CPU lcores: 10 00:16:58.283 EAL: Detected NUMA nodes: 1 00:16:58.283 EAL: Detected shared linkage of DPDK 00:16:58.283 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:16:58.283 EAL: Selected IOVA mode 'PA' 00:16:58.541 TELEMETRY: No legacy callbacks, legacy socket not created 00:16:58.541 00:16:58.541 00:16:58.542 CUnit - A unit testing framework for C - Version 2.1-3 00:16:58.542 http://cunit.sourceforge.net/ 00:16:58.542 00:16:58.542 00:16:58.542 Suite: memory 00:16:58.542 Test: test ... 00:16:58.542 register 0x200000200000 2097152 00:16:58.542 malloc 3145728 00:16:58.542 register 0x200000400000 4194304 00:16:58.542 buf 0x200000500000 len 3145728 PASSED 00:16:58.542 malloc 64 00:16:58.542 buf 0x2000004fff40 len 64 PASSED 00:16:58.542 malloc 4194304 00:16:58.542 register 0x200000800000 6291456 00:16:58.542 buf 0x200000a00000 len 4194304 PASSED 00:16:58.542 free 0x200000500000 3145728 00:16:58.542 free 0x2000004fff40 64 00:16:58.542 unregister 0x200000400000 4194304 PASSED 00:16:58.542 free 0x200000a00000 4194304 00:16:58.542 unregister 0x200000800000 6291456 PASSED 00:16:58.542 malloc 8388608 00:16:58.542 register 0x200000400000 10485760 00:16:58.542 buf 0x200000600000 len 8388608 PASSED 00:16:58.542 free 0x200000600000 8388608 00:16:58.542 unregister 0x200000400000 10485760 PASSED 00:16:58.542 passed 00:16:58.542 00:16:58.542 Run Summary: Type Total Ran Passed Failed Inactive 00:16:58.542 suites 1 1 n/a 0 0 00:16:58.542 tests 1 1 1 0 0 00:16:58.542 asserts 15 15 15 0 n/a 00:16:58.542 00:16:58.542 Elapsed time = 0.009 seconds 00:16:58.542 00:16:58.542 real 0m0.140s 00:16:58.542 user 0m0.018s 00:16:58.542 sys 0m0.021s 00:16:58.542 15:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:58.542 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.542 ************************************ 00:16:58.542 END TEST env_mem_callbacks 00:16:58.542 ************************************ 00:16:58.542 00:16:58.542 real 0m2.830s 00:16:58.542 user 0m1.406s 00:16:58.542 sys 0m0.999s 00:16:58.542 15:35:28 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:16:58.542 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.542 ************************************ 00:16:58.542 END TEST env 00:16:58.542 ************************************ 00:16:58.542 15:35:28 -- spdk/autotest.sh@165 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 
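The rpc suite invoked above launches a standalone spdk_tgt (with the bdev tracepoint group enabled, hence -e bdev) and then blocks until its UNIX-domain RPC socket answers. A rough sketch of that start-and-wait pattern, assuming the same build tree; rpc_get_methods is used here only as a cheap readiness poll, not as part of the test itself:

    # sketch: start the target, wait for /var/tmp/spdk.sock to accept RPCs, then talk to it
    ./build/bin/spdk_tgt -e bdev &
    tgt_pid=$!
    until ./scripts/rpc.py -t 1 rpc_get_methods >/dev/null 2>&1; do sleep 0.2; done
    ./scripts/rpc.py bdev_get_bdevs      # '[]' on a fresh target, as the integrity test below expects
    kill "$tgt_pid"

In the log the same job is done by the waitforlisten helper, which likewise polls the socket instead of sleeping for a fixed time.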
00:16:58.542 15:35:28 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:58.542 15:35:28 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:58.542 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.542 ************************************ 00:16:58.542 START TEST rpc 00:16:58.542 ************************************ 00:16:58.542 15:35:28 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:16:58.800 * Looking for test storage... 00:16:58.800 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:16:58.800 15:35:28 -- rpc/rpc.sh@65 -- # spdk_pid=60052 00:16:58.801 15:35:28 -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:16:58.801 15:35:28 -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:16:58.801 15:35:28 -- rpc/rpc.sh@67 -- # waitforlisten 60052 00:16:58.801 15:35:28 -- common/autotest_common.sh@817 -- # '[' -z 60052 ']' 00:16:58.801 15:35:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:58.801 15:35:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:16:58.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:58.801 15:35:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:58.801 15:35:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:16:58.801 15:35:28 -- common/autotest_common.sh@10 -- # set +x 00:16:58.801 [2024-04-26 15:35:28.946108] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:16:58.801 [2024-04-26 15:35:28.946228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60052 ] 00:16:58.801 [2024-04-26 15:35:29.086582] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:59.059 [2024-04-26 15:35:29.211422] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:16:59.059 [2024-04-26 15:35:29.211531] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 60052' to capture a snapshot of events at runtime. 00:16:59.059 [2024-04-26 15:35:29.211543] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:59.059 [2024-04-26 15:35:29.211552] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:59.059 [2024-04-26 15:35:29.211560] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid60052 for offline analysis/debug. 
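The app_setup_trace notices above spell out the two ways to get at this target's trace data: snapshot it live with spdk_trace, or keep the shared-memory file for later. Restated as commands, with the PID taken from this run and therefore different elsewhere:

    # sketch: capture the 'bdev' tracepoint group enabled for spdk_tgt pid 60052
    spdk_trace -s spdk_tgt -p 60052              # live snapshot, exactly the command quoted above
    cp /dev/shm/spdk_tgt_trace.pid60052 /tmp/    # keep the trace file for offline analysis/debug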
00:16:59.059 [2024-04-26 15:35:29.211597] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:16:59.995 15:35:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:16:59.995 15:35:29 -- common/autotest_common.sh@850 -- # return 0 00:16:59.995 15:35:29 -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:16:59.995 15:35:29 -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:16:59.995 15:35:29 -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:16:59.995 15:35:29 -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:16:59.995 15:35:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:16:59.995 15:35:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:16:59.995 15:35:29 -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 ************************************ 00:16:59.995 START TEST rpc_integrity 00:16:59.995 ************************************ 00:16:59.995 15:35:30 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:16:59.995 15:35:30 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:16:59.995 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.995 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.995 15:35:30 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:16:59.995 15:35:30 -- rpc/rpc.sh@13 -- # jq length 00:16:59.995 15:35:30 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:16:59.995 15:35:30 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:16:59.995 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.995 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.995 15:35:30 -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:16:59.995 15:35:30 -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:16:59.995 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.995 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.995 15:35:30 -- rpc/rpc.sh@16 -- # bdevs='[ 00:16:59.995 { 00:16:59.995 "aliases": [ 00:16:59.995 "fb5edd4b-f6de-424d-ac86-9fb1182ea9fc" 00:16:59.995 ], 00:16:59.995 "assigned_rate_limits": { 00:16:59.995 "r_mbytes_per_sec": 0, 00:16:59.995 "rw_ios_per_sec": 0, 00:16:59.995 "rw_mbytes_per_sec": 0, 00:16:59.995 "w_mbytes_per_sec": 0 00:16:59.995 }, 00:16:59.995 "block_size": 512, 00:16:59.995 "claimed": false, 00:16:59.995 "driver_specific": {}, 00:16:59.995 "memory_domains": [ 00:16:59.995 { 00:16:59.995 "dma_device_id": "system", 00:16:59.995 "dma_device_type": 1 00:16:59.995 }, 00:16:59.995 { 00:16:59.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.995 "dma_device_type": 2 00:16:59.995 } 00:16:59.995 ], 00:16:59.995 "name": "Malloc0", 00:16:59.995 "num_blocks": 16384, 00:16:59.995 "product_name": "Malloc disk", 00:16:59.995 "supported_io_types": { 00:16:59.995 "abort": true, 00:16:59.995 "compare": false, 00:16:59.995 "compare_and_write": false, 00:16:59.995 "flush": true, 00:16:59.995 "nvme_admin": false, 00:16:59.995 "nvme_io": false, 00:16:59.995 "read": true, 00:16:59.995 "reset": true, 
00:16:59.995 "unmap": true, 00:16:59.995 "write": true, 00:16:59.995 "write_zeroes": true 00:16:59.995 }, 00:16:59.995 "uuid": "fb5edd4b-f6de-424d-ac86-9fb1182ea9fc", 00:16:59.995 "zoned": false 00:16:59.995 } 00:16:59.995 ]' 00:16:59.995 15:35:30 -- rpc/rpc.sh@17 -- # jq length 00:16:59.995 15:35:30 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:16:59.995 15:35:30 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:16:59.995 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.995 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 [2024-04-26 15:35:30.189728] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:16:59.995 [2024-04-26 15:35:30.189786] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:59.995 [2024-04-26 15:35:30.189806] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1513bd0 00:16:59.995 [2024-04-26 15:35:30.189816] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:59.995 [2024-04-26 15:35:30.191793] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:59.995 [2024-04-26 15:35:30.191846] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:16:59.995 Passthru0 00:16:59.995 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.995 15:35:30 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:16:59.995 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.995 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:16:59.995 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:16:59.995 15:35:30 -- rpc/rpc.sh@20 -- # bdevs='[ 00:16:59.995 { 00:16:59.995 "aliases": [ 00:16:59.995 "fb5edd4b-f6de-424d-ac86-9fb1182ea9fc" 00:16:59.995 ], 00:16:59.995 "assigned_rate_limits": { 00:16:59.995 "r_mbytes_per_sec": 0, 00:16:59.995 "rw_ios_per_sec": 0, 00:16:59.995 "rw_mbytes_per_sec": 0, 00:16:59.995 "w_mbytes_per_sec": 0 00:16:59.995 }, 00:16:59.995 "block_size": 512, 00:16:59.995 "claim_type": "exclusive_write", 00:16:59.995 "claimed": true, 00:16:59.995 "driver_specific": {}, 00:16:59.995 "memory_domains": [ 00:16:59.995 { 00:16:59.995 "dma_device_id": "system", 00:16:59.995 "dma_device_type": 1 00:16:59.995 }, 00:16:59.995 { 00:16:59.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.995 "dma_device_type": 2 00:16:59.995 } 00:16:59.995 ], 00:16:59.995 "name": "Malloc0", 00:16:59.995 "num_blocks": 16384, 00:16:59.995 "product_name": "Malloc disk", 00:16:59.995 "supported_io_types": { 00:16:59.995 "abort": true, 00:16:59.995 "compare": false, 00:16:59.995 "compare_and_write": false, 00:16:59.995 "flush": true, 00:16:59.995 "nvme_admin": false, 00:16:59.995 "nvme_io": false, 00:16:59.995 "read": true, 00:16:59.995 "reset": true, 00:16:59.995 "unmap": true, 00:16:59.995 "write": true, 00:16:59.995 "write_zeroes": true 00:16:59.995 }, 00:16:59.995 "uuid": "fb5edd4b-f6de-424d-ac86-9fb1182ea9fc", 00:16:59.995 "zoned": false 00:16:59.995 }, 00:16:59.995 { 00:16:59.995 "aliases": [ 00:16:59.995 "0fdb648f-44c6-5efb-9863-85746022e84f" 00:16:59.995 ], 00:16:59.995 "assigned_rate_limits": { 00:16:59.995 "r_mbytes_per_sec": 0, 00:16:59.995 "rw_ios_per_sec": 0, 00:16:59.995 "rw_mbytes_per_sec": 0, 00:16:59.995 "w_mbytes_per_sec": 0 00:16:59.995 }, 00:16:59.995 "block_size": 512, 00:16:59.995 "claimed": false, 00:16:59.995 "driver_specific": { 00:16:59.995 "passthru": { 00:16:59.995 "base_bdev_name": "Malloc0", 00:16:59.995 "name": 
"Passthru0" 00:16:59.995 } 00:16:59.995 }, 00:16:59.995 "memory_domains": [ 00:16:59.995 { 00:16:59.995 "dma_device_id": "system", 00:16:59.995 "dma_device_type": 1 00:16:59.995 }, 00:16:59.995 { 00:16:59.995 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.995 "dma_device_type": 2 00:16:59.995 } 00:16:59.995 ], 00:16:59.995 "name": "Passthru0", 00:16:59.995 "num_blocks": 16384, 00:16:59.995 "product_name": "passthru", 00:16:59.995 "supported_io_types": { 00:16:59.995 "abort": true, 00:16:59.995 "compare": false, 00:16:59.995 "compare_and_write": false, 00:16:59.995 "flush": true, 00:16:59.995 "nvme_admin": false, 00:16:59.995 "nvme_io": false, 00:16:59.995 "read": true, 00:16:59.995 "reset": true, 00:16:59.995 "unmap": true, 00:16:59.995 "write": true, 00:16:59.995 "write_zeroes": true 00:16:59.995 }, 00:16:59.995 "uuid": "0fdb648f-44c6-5efb-9863-85746022e84f", 00:16:59.995 "zoned": false 00:16:59.995 } 00:16:59.995 ]' 00:16:59.995 15:35:30 -- rpc/rpc.sh@21 -- # jq length 00:16:59.995 15:35:30 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:16:59.995 15:35:30 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:16:59.995 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:16:59.995 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.252 15:35:30 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:17:00.252 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.252 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.252 15:35:30 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:00.252 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.252 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.252 15:35:30 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:00.252 15:35:30 -- rpc/rpc.sh@26 -- # jq length 00:17:00.252 15:35:30 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:00.252 00:17:00.252 real 0m0.330s 00:17:00.252 user 0m0.212s 00:17:00.252 sys 0m0.043s 00:17:00.252 15:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.252 ************************************ 00:17:00.252 END TEST rpc_integrity 00:17:00.252 ************************************ 00:17:00.252 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 15:35:30 -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:17:00.252 15:35:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:00.252 15:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.252 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 ************************************ 00:17:00.252 START TEST rpc_plugins 00:17:00.252 ************************************ 00:17:00.252 15:35:30 -- common/autotest_common.sh@1111 -- # rpc_plugins 00:17:00.252 15:35:30 -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:17:00.252 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.252 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.252 15:35:30 -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:17:00.252 15:35:30 -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:17:00.252 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.252 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.252 15:35:30 
-- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.252 15:35:30 -- rpc/rpc.sh@31 -- # bdevs='[ 00:17:00.252 { 00:17:00.252 "aliases": [ 00:17:00.252 "c16d49b9-65ca-4086-a5ae-5a202f221724" 00:17:00.252 ], 00:17:00.252 "assigned_rate_limits": { 00:17:00.252 "r_mbytes_per_sec": 0, 00:17:00.252 "rw_ios_per_sec": 0, 00:17:00.252 "rw_mbytes_per_sec": 0, 00:17:00.252 "w_mbytes_per_sec": 0 00:17:00.252 }, 00:17:00.252 "block_size": 4096, 00:17:00.252 "claimed": false, 00:17:00.252 "driver_specific": {}, 00:17:00.252 "memory_domains": [ 00:17:00.252 { 00:17:00.252 "dma_device_id": "system", 00:17:00.252 "dma_device_type": 1 00:17:00.252 }, 00:17:00.252 { 00:17:00.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.252 "dma_device_type": 2 00:17:00.252 } 00:17:00.252 ], 00:17:00.252 "name": "Malloc1", 00:17:00.252 "num_blocks": 256, 00:17:00.252 "product_name": "Malloc disk", 00:17:00.252 "supported_io_types": { 00:17:00.252 "abort": true, 00:17:00.252 "compare": false, 00:17:00.252 "compare_and_write": false, 00:17:00.252 "flush": true, 00:17:00.252 "nvme_admin": false, 00:17:00.252 "nvme_io": false, 00:17:00.252 "read": true, 00:17:00.252 "reset": true, 00:17:00.252 "unmap": true, 00:17:00.252 "write": true, 00:17:00.252 "write_zeroes": true 00:17:00.252 }, 00:17:00.252 "uuid": "c16d49b9-65ca-4086-a5ae-5a202f221724", 00:17:00.252 "zoned": false 00:17:00.252 } 00:17:00.252 ]' 00:17:00.252 15:35:30 -- rpc/rpc.sh@32 -- # jq length 00:17:00.510 15:35:30 -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:17:00.510 15:35:30 -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:17:00.510 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.510 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.510 15:35:30 -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:17:00.510 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.510 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.510 15:35:30 -- rpc/rpc.sh@35 -- # bdevs='[]' 00:17:00.510 15:35:30 -- rpc/rpc.sh@36 -- # jq length 00:17:00.510 15:35:30 -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:17:00.510 00:17:00.510 real 0m0.167s 00:17:00.510 user 0m0.103s 00:17:00.510 sys 0m0.029s 00:17:00.510 15:35:30 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.510 ************************************ 00:17:00.510 END TEST rpc_plugins 00:17:00.510 ************************************ 00:17:00.510 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 15:35:30 -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:17:00.510 15:35:30 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:00.510 15:35:30 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.510 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 ************************************ 00:17:00.510 START TEST rpc_trace_cmd_test 00:17:00.510 ************************************ 00:17:00.510 15:35:30 -- common/autotest_common.sh@1111 -- # rpc_trace_cmd_test 00:17:00.510 15:35:30 -- rpc/rpc.sh@40 -- # local info 00:17:00.510 15:35:30 -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:17:00.510 15:35:30 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:00.510 15:35:30 -- common/autotest_common.sh@10 -- # set +x 00:17:00.510 15:35:30 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:00.510 15:35:30 -- rpc/rpc.sh@42 -- # 
info='{ 00:17:00.510 "bdev": { 00:17:00.510 "mask": "0x8", 00:17:00.510 "tpoint_mask": "0xffffffffffffffff" 00:17:00.510 }, 00:17:00.510 "bdev_nvme": { 00:17:00.510 "mask": "0x4000", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "blobfs": { 00:17:00.510 "mask": "0x80", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "dsa": { 00:17:00.510 "mask": "0x200", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "ftl": { 00:17:00.510 "mask": "0x40", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "iaa": { 00:17:00.510 "mask": "0x1000", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "iscsi_conn": { 00:17:00.510 "mask": "0x2", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "nvme_pcie": { 00:17:00.510 "mask": "0x800", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "nvme_tcp": { 00:17:00.510 "mask": "0x2000", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "nvmf_rdma": { 00:17:00.510 "mask": "0x10", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "nvmf_tcp": { 00:17:00.510 "mask": "0x20", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "scsi": { 00:17:00.510 "mask": "0x4", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "sock": { 00:17:00.510 "mask": "0x8000", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "thread": { 00:17:00.510 "mask": "0x400", 00:17:00.510 "tpoint_mask": "0x0" 00:17:00.510 }, 00:17:00.510 "tpoint_group_mask": "0x8", 00:17:00.510 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid60052" 00:17:00.510 }' 00:17:00.510 15:35:30 -- rpc/rpc.sh@43 -- # jq length 00:17:00.768 15:35:30 -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:17:00.768 15:35:30 -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:17:00.768 15:35:30 -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:17:00.768 15:35:30 -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:17:00.768 15:35:30 -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:17:00.768 15:35:30 -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:17:00.768 15:35:30 -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:17:00.768 15:35:30 -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:17:00.768 15:35:31 -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:17:00.768 00:17:00.768 real 0m0.258s 00:17:00.768 user 0m0.218s 00:17:00.768 sys 0m0.030s 00:17:00.768 15:35:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:00.768 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:00.768 ************************************ 00:17:00.768 END TEST rpc_trace_cmd_test 00:17:00.768 ************************************ 00:17:00.768 15:35:31 -- rpc/rpc.sh@76 -- # [[ 1 -eq 1 ]] 00:17:00.768 15:35:31 -- rpc/rpc.sh@77 -- # run_test go_rpc go_rpc 00:17:00.768 15:35:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:00.768 15:35:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:00.768 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.025 ************************************ 00:17:01.025 START TEST go_rpc 00:17:01.025 ************************************ 00:17:01.025 15:35:31 -- common/autotest_common.sh@1111 -- # go_rpc 00:17:01.025 15:35:31 -- rpc/rpc.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:17:01.025 15:35:31 -- rpc/rpc.sh@51 -- # bdevs='[]' 00:17:01.025 15:35:31 -- rpc/rpc.sh@52 -- # jq length 00:17:01.025 15:35:31 -- rpc/rpc.sh@52 -- # '[' 0 == 0 ']' 00:17:01.025 15:35:31 -- rpc/rpc.sh@54 -- # rpc_cmd bdev_malloc_create 8 512 00:17:01.025 
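Each rpc_cmd in this suite resolves to the project's JSON-RPC client (scripts/rpc.py, or the Go client built because SPDK_JSONRPC_GO_CLIENT=1 in this run), so the malloc/passthru lifecycle being exercised here can be replayed by hand against the default /var/tmp/spdk.sock. A sketch using the same arguments the tests pass:

    # sketch: the bdev lifecycle the integrity/plugins/go_rpc tests drive via rpc_cmd
    ./scripts/rpc.py bdev_malloc_create 8 512                      # 8 MB malloc bdev, 512-byte blocks
    ./scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0  # claim it behind a passthru bdev
    ./scripts/rpc.py bdev_get_bdevs | jq length                    # inspect, as the tests do with jq
    ./scripts/rpc.py bdev_passthru_delete Passthru0
    ./scripts/rpc.py bdev_malloc_delete Malloc0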
15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.025 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.025 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.025 15:35:31 -- rpc/rpc.sh@54 -- # malloc=Malloc2 00:17:01.025 15:35:31 -- rpc/rpc.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:17:01.025 15:35:31 -- rpc/rpc.sh@56 -- # bdevs='[{"aliases":["411ceb03-9193-4dbe-8d0e-5dd0ee687e0f"],"assigned_rate_limits":{"r_mbytes_per_sec":0,"rw_ios_per_sec":0,"rw_mbytes_per_sec":0,"w_mbytes_per_sec":0},"block_size":512,"claimed":false,"driver_specific":{},"memory_domains":[{"dma_device_id":"system","dma_device_type":1},{"dma_device_id":"SPDK_ACCEL_DMA_DEVICE","dma_device_type":2}],"name":"Malloc2","num_blocks":16384,"product_name":"Malloc disk","supported_io_types":{"abort":true,"compare":false,"compare_and_write":false,"flush":true,"nvme_admin":false,"nvme_io":false,"read":true,"reset":true,"unmap":true,"write":true,"write_zeroes":true},"uuid":"411ceb03-9193-4dbe-8d0e-5dd0ee687e0f","zoned":false}]' 00:17:01.025 15:35:31 -- rpc/rpc.sh@57 -- # jq length 00:17:01.025 15:35:31 -- rpc/rpc.sh@57 -- # '[' 1 == 1 ']' 00:17:01.025 15:35:31 -- rpc/rpc.sh@59 -- # rpc_cmd bdev_malloc_delete Malloc2 00:17:01.025 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.025 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.025 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.025 15:35:31 -- rpc/rpc.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_gorpc 00:17:01.025 15:35:31 -- rpc/rpc.sh@60 -- # bdevs='[]' 00:17:01.025 15:35:31 -- rpc/rpc.sh@61 -- # jq length 00:17:01.283 15:35:31 -- rpc/rpc.sh@61 -- # '[' 0 == 0 ']' 00:17:01.283 00:17:01.283 real 0m0.236s 00:17:01.283 user 0m0.157s 00:17:01.283 sys 0m0.039s 00:17:01.283 15:35:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:01.283 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 ************************************ 00:17:01.283 END TEST go_rpc 00:17:01.283 ************************************ 00:17:01.283 15:35:31 -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:17:01.283 15:35:31 -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:17:01.283 15:35:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:01.283 15:35:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:01.283 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 ************************************ 00:17:01.283 START TEST rpc_daemon_integrity 00:17:01.283 ************************************ 00:17:01.283 15:35:31 -- common/autotest_common.sh@1111 -- # rpc_integrity 00:17:01.283 15:35:31 -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:17:01.283 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.283 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.283 15:35:31 -- rpc/rpc.sh@12 -- # bdevs='[]' 00:17:01.283 15:35:31 -- rpc/rpc.sh@13 -- # jq length 00:17:01.283 15:35:31 -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:17:01.283 15:35:31 -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:17:01.283 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.283 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.283 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.283 15:35:31 -- rpc/rpc.sh@15 -- # malloc=Malloc3 00:17:01.283 15:35:31 -- rpc/rpc.sh@16 -- # rpc_cmd 
bdev_get_bdevs 00:17:01.283 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.283 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.541 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.541 15:35:31 -- rpc/rpc.sh@16 -- # bdevs='[ 00:17:01.541 { 00:17:01.541 "aliases": [ 00:17:01.541 "d5ed0519-9905-4448-8372-b568ce06c956" 00:17:01.541 ], 00:17:01.541 "assigned_rate_limits": { 00:17:01.541 "r_mbytes_per_sec": 0, 00:17:01.541 "rw_ios_per_sec": 0, 00:17:01.541 "rw_mbytes_per_sec": 0, 00:17:01.541 "w_mbytes_per_sec": 0 00:17:01.541 }, 00:17:01.541 "block_size": 512, 00:17:01.541 "claimed": false, 00:17:01.541 "driver_specific": {}, 00:17:01.541 "memory_domains": [ 00:17:01.541 { 00:17:01.541 "dma_device_id": "system", 00:17:01.541 "dma_device_type": 1 00:17:01.541 }, 00:17:01.541 { 00:17:01.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.541 "dma_device_type": 2 00:17:01.541 } 00:17:01.541 ], 00:17:01.541 "name": "Malloc3", 00:17:01.541 "num_blocks": 16384, 00:17:01.541 "product_name": "Malloc disk", 00:17:01.541 "supported_io_types": { 00:17:01.541 "abort": true, 00:17:01.541 "compare": false, 00:17:01.541 "compare_and_write": false, 00:17:01.541 "flush": true, 00:17:01.541 "nvme_admin": false, 00:17:01.541 "nvme_io": false, 00:17:01.541 "read": true, 00:17:01.541 "reset": true, 00:17:01.541 "unmap": true, 00:17:01.541 "write": true, 00:17:01.541 "write_zeroes": true 00:17:01.541 }, 00:17:01.541 "uuid": "d5ed0519-9905-4448-8372-b568ce06c956", 00:17:01.541 "zoned": false 00:17:01.541 } 00:17:01.541 ]' 00:17:01.541 15:35:31 -- rpc/rpc.sh@17 -- # jq length 00:17:01.541 15:35:31 -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:17:01.541 15:35:31 -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc3 -p Passthru0 00:17:01.541 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.541 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.541 [2024-04-26 15:35:31.640030] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:17:01.541 [2024-04-26 15:35:31.640087] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:01.541 [2024-04-26 15:35:31.640107] vbdev_passthru.c: 676:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x156bcd0 00:17:01.541 [2024-04-26 15:35:31.640116] vbdev_passthru.c: 691:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:01.541 [2024-04-26 15:35:31.641774] vbdev_passthru.c: 704:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:01.541 [2024-04-26 15:35:31.641825] vbdev_passthru.c: 705:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:17:01.541 Passthru0 00:17:01.541 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.541 15:35:31 -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:17:01.541 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.541 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.541 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.541 15:35:31 -- rpc/rpc.sh@20 -- # bdevs='[ 00:17:01.541 { 00:17:01.541 "aliases": [ 00:17:01.541 "d5ed0519-9905-4448-8372-b568ce06c956" 00:17:01.541 ], 00:17:01.541 "assigned_rate_limits": { 00:17:01.541 "r_mbytes_per_sec": 0, 00:17:01.541 "rw_ios_per_sec": 0, 00:17:01.541 "rw_mbytes_per_sec": 0, 00:17:01.541 "w_mbytes_per_sec": 0 00:17:01.541 }, 00:17:01.541 "block_size": 512, 00:17:01.541 "claim_type": "exclusive_write", 00:17:01.541 "claimed": true, 00:17:01.541 "driver_specific": {}, 00:17:01.541 
"memory_domains": [ 00:17:01.541 { 00:17:01.541 "dma_device_id": "system", 00:17:01.541 "dma_device_type": 1 00:17:01.541 }, 00:17:01.541 { 00:17:01.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.541 "dma_device_type": 2 00:17:01.541 } 00:17:01.541 ], 00:17:01.541 "name": "Malloc3", 00:17:01.541 "num_blocks": 16384, 00:17:01.541 "product_name": "Malloc disk", 00:17:01.541 "supported_io_types": { 00:17:01.541 "abort": true, 00:17:01.541 "compare": false, 00:17:01.541 "compare_and_write": false, 00:17:01.541 "flush": true, 00:17:01.541 "nvme_admin": false, 00:17:01.541 "nvme_io": false, 00:17:01.541 "read": true, 00:17:01.541 "reset": true, 00:17:01.541 "unmap": true, 00:17:01.541 "write": true, 00:17:01.541 "write_zeroes": true 00:17:01.541 }, 00:17:01.541 "uuid": "d5ed0519-9905-4448-8372-b568ce06c956", 00:17:01.541 "zoned": false 00:17:01.541 }, 00:17:01.541 { 00:17:01.541 "aliases": [ 00:17:01.541 "b5b62dac-cf9c-5eef-8120-0dfc5c5d3f94" 00:17:01.541 ], 00:17:01.541 "assigned_rate_limits": { 00:17:01.541 "r_mbytes_per_sec": 0, 00:17:01.541 "rw_ios_per_sec": 0, 00:17:01.541 "rw_mbytes_per_sec": 0, 00:17:01.541 "w_mbytes_per_sec": 0 00:17:01.541 }, 00:17:01.541 "block_size": 512, 00:17:01.541 "claimed": false, 00:17:01.541 "driver_specific": { 00:17:01.541 "passthru": { 00:17:01.541 "base_bdev_name": "Malloc3", 00:17:01.541 "name": "Passthru0" 00:17:01.541 } 00:17:01.541 }, 00:17:01.541 "memory_domains": [ 00:17:01.541 { 00:17:01.541 "dma_device_id": "system", 00:17:01.541 "dma_device_type": 1 00:17:01.541 }, 00:17:01.541 { 00:17:01.541 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.541 "dma_device_type": 2 00:17:01.541 } 00:17:01.541 ], 00:17:01.541 "name": "Passthru0", 00:17:01.541 "num_blocks": 16384, 00:17:01.541 "product_name": "passthru", 00:17:01.541 "supported_io_types": { 00:17:01.541 "abort": true, 00:17:01.541 "compare": false, 00:17:01.541 "compare_and_write": false, 00:17:01.541 "flush": true, 00:17:01.541 "nvme_admin": false, 00:17:01.541 "nvme_io": false, 00:17:01.541 "read": true, 00:17:01.541 "reset": true, 00:17:01.541 "unmap": true, 00:17:01.541 "write": true, 00:17:01.541 "write_zeroes": true 00:17:01.541 }, 00:17:01.541 "uuid": "b5b62dac-cf9c-5eef-8120-0dfc5c5d3f94", 00:17:01.541 "zoned": false 00:17:01.541 } 00:17:01.541 ]' 00:17:01.541 15:35:31 -- rpc/rpc.sh@21 -- # jq length 00:17:01.541 15:35:31 -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:17:01.541 15:35:31 -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:17:01.541 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.541 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.542 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.542 15:35:31 -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc3 00:17:01.542 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.542 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.542 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.542 15:35:31 -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:17:01.542 15:35:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:01.542 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.542 15:35:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:01.542 15:35:31 -- rpc/rpc.sh@25 -- # bdevs='[]' 00:17:01.542 15:35:31 -- rpc/rpc.sh@26 -- # jq length 00:17:01.542 15:35:31 -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:17:01.542 00:17:01.542 real 0m0.337s 00:17:01.542 user 0m0.219s 00:17:01.542 sys 0m0.043s 00:17:01.542 
15:35:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:01.542 15:35:31 -- common/autotest_common.sh@10 -- # set +x 00:17:01.542 ************************************ 00:17:01.542 END TEST rpc_daemon_integrity 00:17:01.542 ************************************ 00:17:01.800 15:35:31 -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:17:01.800 15:35:31 -- rpc/rpc.sh@84 -- # killprocess 60052 00:17:01.800 15:35:31 -- common/autotest_common.sh@936 -- # '[' -z 60052 ']' 00:17:01.800 15:35:31 -- common/autotest_common.sh@940 -- # kill -0 60052 00:17:01.800 15:35:31 -- common/autotest_common.sh@941 -- # uname 00:17:01.800 15:35:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:01.800 15:35:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60052 00:17:01.800 15:35:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:01.800 15:35:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:01.800 killing process with pid 60052 00:17:01.800 15:35:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60052' 00:17:01.800 15:35:31 -- common/autotest_common.sh@955 -- # kill 60052 00:17:01.800 15:35:31 -- common/autotest_common.sh@960 -- # wait 60052 00:17:02.058 00:17:02.058 real 0m3.527s 00:17:02.058 user 0m4.645s 00:17:02.058 sys 0m0.936s 00:17:02.058 15:35:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:02.058 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.058 ************************************ 00:17:02.058 END TEST rpc 00:17:02.058 ************************************ 00:17:02.316 15:35:32 -- spdk/autotest.sh@166 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:02.316 15:35:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:02.316 15:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.316 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.316 ************************************ 00:17:02.316 START TEST skip_rpc 00:17:02.316 ************************************ 00:17:02.316 15:35:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:17:02.316 * Looking for test storage... 00:17:02.316 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:17:02.316 15:35:32 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:02.316 15:35:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:02.316 15:35:32 -- common/autotest_common.sh@10 -- # set +x 00:17:02.316 ************************************ 00:17:02.316 START TEST skip_rpc 00:17:02.316 ************************************ 00:17:02.316 15:35:32 -- common/autotest_common.sh@1111 -- # test_skip_rpc 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@16 -- # local spdk_pid=60350 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:02.316 15:35:32 -- rpc/skip_rpc.sh@19 -- # sleep 5 00:17:02.574 [2024-04-26 15:35:32.672197] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:02.574 [2024-04-26 15:35:32.672305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60350 ] 00:17:02.574 [2024-04-26 15:35:32.813044] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.863 [2024-04-26 15:35:32.931000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:08.134 15:35:37 -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:17:08.134 15:35:37 -- common/autotest_common.sh@638 -- # local es=0 00:17:08.134 15:35:37 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd spdk_get_version 00:17:08.134 15:35:37 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:17:08.134 15:35:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:08.134 15:35:37 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:17:08.134 15:35:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:08.134 15:35:37 -- common/autotest_common.sh@641 -- # rpc_cmd spdk_get_version 00:17:08.134 15:35:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:08.134 15:35:37 -- common/autotest_common.sh@10 -- # set +x 00:17:08.134 2024/04/26 15:35:37 error on client creation, err: error during client creation for Unix socket, err: could not connect to a Unix socket on address /var/tmp/spdk.sock, err: dial unix /var/tmp/spdk.sock: connect: no such file or directory 00:17:08.134 15:35:37 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:08.134 15:35:37 -- common/autotest_common.sh@641 -- # es=1 00:17:08.134 15:35:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:08.134 15:35:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:08.134 15:35:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:08.134 15:35:37 -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:17:08.134 15:35:37 -- rpc/skip_rpc.sh@23 -- # killprocess 60350 00:17:08.134 15:35:37 -- common/autotest_common.sh@936 -- # '[' -z 60350 ']' 00:17:08.134 15:35:37 -- common/autotest_common.sh@940 -- # kill -0 60350 00:17:08.134 15:35:37 -- common/autotest_common.sh@941 -- # uname 00:17:08.134 15:35:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:08.134 15:35:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60350 00:17:08.134 killing process with pid 60350 00:17:08.134 15:35:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:08.134 15:35:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:08.134 15:35:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60350' 00:17:08.134 15:35:37 -- common/autotest_common.sh@955 -- # kill 60350 00:17:08.134 15:35:37 -- common/autotest_common.sh@960 -- # wait 60350 00:17:08.134 ************************************ 00:17:08.134 END TEST skip_rpc 00:17:08.134 ************************************ 00:17:08.134 00:17:08.134 real 0m5.490s 00:17:08.134 user 0m5.089s 00:17:08.134 sys 0m0.303s 00:17:08.134 15:35:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:08.134 15:35:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.134 15:35:38 -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:17:08.134 15:35:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:08.134 15:35:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:08.134 15:35:38 -- 
common/autotest_common.sh@10 -- # set +x 00:17:08.134 ************************************ 00:17:08.134 START TEST skip_rpc_with_json 00:17:08.134 ************************************ 00:17:08.134 15:35:38 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_json 00:17:08.134 15:35:38 -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:17:08.134 15:35:38 -- rpc/skip_rpc.sh@28 -- # local spdk_pid=60452 00:17:08.134 15:35:38 -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:08.134 15:35:38 -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:08.134 15:35:38 -- rpc/skip_rpc.sh@31 -- # waitforlisten 60452 00:17:08.134 15:35:38 -- common/autotest_common.sh@817 -- # '[' -z 60452 ']' 00:17:08.134 15:35:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:08.134 15:35:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:08.134 15:35:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:08.134 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:08.134 15:35:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:08.134 15:35:38 -- common/autotest_common.sh@10 -- # set +x 00:17:08.134 [2024-04-26 15:35:38.293294] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:17:08.134 [2024-04-26 15:35:38.293412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60452 ] 00:17:08.392 [2024-04-26 15:35:38.435374] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:08.392 [2024-04-26 15:35:38.557504] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:09.326 15:35:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:09.326 15:35:39 -- common/autotest_common.sh@850 -- # return 0 00:17:09.326 15:35:39 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:17:09.326 15:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.326 15:35:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.326 [2024-04-26 15:35:39.321614] nvmf_rpc.c:2513:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:17:09.326 2024/04/26 15:35:39 error on JSON-RPC call, method: nvmf_get_transports, params: map[trtype:tcp], err: error received for nvmf_get_transports method, err: Code=-19 Msg=No such device 00:17:09.326 request: 00:17:09.326 { 00:17:09.326 "method": "nvmf_get_transports", 00:17:09.326 "params": { 00:17:09.326 "trtype": "tcp" 00:17:09.326 } 00:17:09.326 } 00:17:09.326 Got JSON-RPC error response 00:17:09.326 GoRPCClient: error on JSON-RPC call 00:17:09.326 15:35:39 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:17:09.326 15:35:39 -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:17:09.326 15:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.326 15:35:39 -- common/autotest_common.sh@10 -- # set +x 00:17:09.326 [2024-04-26 15:35:39.333735] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:09.326 15:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.326 15:35:39 -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:17:09.326 15:35:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:09.326 15:35:39 -- 
common/autotest_common.sh@10 -- # set +x 00:17:09.326 15:35:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:09.326 15:35:39 -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:09.326 { 00:17:09.326 "subsystems": [ 00:17:09.326 { 00:17:09.326 "subsystem": "keyring", 00:17:09.326 "config": [] 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "subsystem": "iobuf", 00:17:09.326 "config": [ 00:17:09.326 { 00:17:09.326 "method": "iobuf_set_options", 00:17:09.326 "params": { 00:17:09.326 "large_bufsize": 135168, 00:17:09.326 "large_pool_count": 1024, 00:17:09.326 "small_bufsize": 8192, 00:17:09.326 "small_pool_count": 8192 00:17:09.326 } 00:17:09.326 } 00:17:09.326 ] 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "subsystem": "sock", 00:17:09.326 "config": [ 00:17:09.326 { 00:17:09.326 "method": "sock_impl_set_options", 00:17:09.326 "params": { 00:17:09.326 "enable_ktls": false, 00:17:09.326 "enable_placement_id": 0, 00:17:09.326 "enable_quickack": false, 00:17:09.326 "enable_recv_pipe": true, 00:17:09.326 "enable_zerocopy_send_client": false, 00:17:09.326 "enable_zerocopy_send_server": true, 00:17:09.326 "impl_name": "posix", 00:17:09.326 "recv_buf_size": 2097152, 00:17:09.326 "send_buf_size": 2097152, 00:17:09.326 "tls_version": 0, 00:17:09.326 "zerocopy_threshold": 0 00:17:09.326 } 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "method": "sock_impl_set_options", 00:17:09.326 "params": { 00:17:09.326 "enable_ktls": false, 00:17:09.326 "enable_placement_id": 0, 00:17:09.326 "enable_quickack": false, 00:17:09.326 "enable_recv_pipe": true, 00:17:09.326 "enable_zerocopy_send_client": false, 00:17:09.326 "enable_zerocopy_send_server": true, 00:17:09.326 "impl_name": "ssl", 00:17:09.326 "recv_buf_size": 4096, 00:17:09.326 "send_buf_size": 4096, 00:17:09.326 "tls_version": 0, 00:17:09.326 "zerocopy_threshold": 0 00:17:09.326 } 00:17:09.326 } 00:17:09.326 ] 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "subsystem": "vmd", 00:17:09.326 "config": [] 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "subsystem": "accel", 00:17:09.326 "config": [ 00:17:09.326 { 00:17:09.326 "method": "accel_set_options", 00:17:09.326 "params": { 00:17:09.326 "buf_count": 2048, 00:17:09.326 "large_cache_size": 16, 00:17:09.326 "sequence_count": 2048, 00:17:09.326 "small_cache_size": 128, 00:17:09.326 "task_count": 2048 00:17:09.326 } 00:17:09.326 } 00:17:09.326 ] 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "subsystem": "bdev", 00:17:09.326 "config": [ 00:17:09.326 { 00:17:09.326 "method": "bdev_set_options", 00:17:09.326 "params": { 00:17:09.326 "bdev_auto_examine": true, 00:17:09.326 "bdev_io_cache_size": 256, 00:17:09.326 "bdev_io_pool_size": 65535, 00:17:09.326 "iobuf_large_cache_size": 16, 00:17:09.326 "iobuf_small_cache_size": 128 00:17:09.326 } 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "method": "bdev_raid_set_options", 00:17:09.326 "params": { 00:17:09.326 "process_window_size_kb": 1024 00:17:09.326 } 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "method": "bdev_iscsi_set_options", 00:17:09.326 "params": { 00:17:09.326 "timeout_sec": 30 00:17:09.326 } 00:17:09.326 }, 00:17:09.326 { 00:17:09.326 "method": "bdev_nvme_set_options", 00:17:09.326 "params": { 00:17:09.326 "action_on_timeout": "none", 00:17:09.326 "allow_accel_sequence": false, 00:17:09.326 "arbitration_burst": 0, 00:17:09.326 "bdev_retry_count": 3, 00:17:09.326 "ctrlr_loss_timeout_sec": 0, 00:17:09.326 "delay_cmd_submit": true, 00:17:09.326 "dhchap_dhgroups": [ 00:17:09.326 "null", 00:17:09.326 "ffdhe2048", 00:17:09.326 
"ffdhe3072", 00:17:09.326 "ffdhe4096", 00:17:09.326 "ffdhe6144", 00:17:09.326 "ffdhe8192" 00:17:09.326 ], 00:17:09.326 "dhchap_digests": [ 00:17:09.326 "sha256", 00:17:09.326 "sha384", 00:17:09.326 "sha512" 00:17:09.326 ], 00:17:09.326 "disable_auto_failback": false, 00:17:09.326 "fast_io_fail_timeout_sec": 0, 00:17:09.326 "generate_uuids": false, 00:17:09.326 "high_priority_weight": 0, 00:17:09.326 "io_path_stat": false, 00:17:09.326 "io_queue_requests": 0, 00:17:09.326 "keep_alive_timeout_ms": 10000, 00:17:09.326 "low_priority_weight": 0, 00:17:09.327 "medium_priority_weight": 0, 00:17:09.327 "nvme_adminq_poll_period_us": 10000, 00:17:09.327 "nvme_error_stat": false, 00:17:09.327 "nvme_ioq_poll_period_us": 0, 00:17:09.327 "rdma_cm_event_timeout_ms": 0, 00:17:09.327 "rdma_max_cq_size": 0, 00:17:09.327 "rdma_srq_size": 0, 00:17:09.327 "reconnect_delay_sec": 0, 00:17:09.327 "timeout_admin_us": 0, 00:17:09.327 "timeout_us": 0, 00:17:09.327 "transport_ack_timeout": 0, 00:17:09.327 "transport_retry_count": 4, 00:17:09.327 "transport_tos": 0 00:17:09.327 } 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "method": "bdev_nvme_set_hotplug", 00:17:09.327 "params": { 00:17:09.327 "enable": false, 00:17:09.327 "period_us": 100000 00:17:09.327 } 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "method": "bdev_wait_for_examine" 00:17:09.327 } 00:17:09.327 ] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "scsi", 00:17:09.327 "config": null 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "scheduler", 00:17:09.327 "config": [ 00:17:09.327 { 00:17:09.327 "method": "framework_set_scheduler", 00:17:09.327 "params": { 00:17:09.327 "name": "static" 00:17:09.327 } 00:17:09.327 } 00:17:09.327 ] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "vhost_scsi", 00:17:09.327 "config": [] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "vhost_blk", 00:17:09.327 "config": [] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "ublk", 00:17:09.327 "config": [] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "nbd", 00:17:09.327 "config": [] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "nvmf", 00:17:09.327 "config": [ 00:17:09.327 { 00:17:09.327 "method": "nvmf_set_config", 00:17:09.327 "params": { 00:17:09.327 "admin_cmd_passthru": { 00:17:09.327 "identify_ctrlr": false 00:17:09.327 }, 00:17:09.327 "discovery_filter": "match_any" 00:17:09.327 } 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "method": "nvmf_set_max_subsystems", 00:17:09.327 "params": { 00:17:09.327 "max_subsystems": 1024 00:17:09.327 } 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "method": "nvmf_set_crdt", 00:17:09.327 "params": { 00:17:09.327 "crdt1": 0, 00:17:09.327 "crdt2": 0, 00:17:09.327 "crdt3": 0 00:17:09.327 } 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "method": "nvmf_create_transport", 00:17:09.327 "params": { 00:17:09.327 "abort_timeout_sec": 1, 00:17:09.327 "ack_timeout": 0, 00:17:09.327 "buf_cache_size": 4294967295, 00:17:09.327 "c2h_success": true, 00:17:09.327 "data_wr_pool_size": 0, 00:17:09.327 "dif_insert_or_strip": false, 00:17:09.327 "in_capsule_data_size": 4096, 00:17:09.327 "io_unit_size": 131072, 00:17:09.327 "max_aq_depth": 128, 00:17:09.327 "max_io_qpairs_per_ctrlr": 127, 00:17:09.327 "max_io_size": 131072, 00:17:09.327 "max_queue_depth": 128, 00:17:09.327 "num_shared_buffers": 511, 00:17:09.327 "sock_priority": 0, 00:17:09.327 "trtype": "TCP", 00:17:09.327 "zcopy": false 00:17:09.327 } 00:17:09.327 } 00:17:09.327 ] 00:17:09.327 }, 00:17:09.327 { 00:17:09.327 "subsystem": "iscsi", 
00:17:09.327 "config": [ 00:17:09.327 { 00:17:09.327 "method": "iscsi_set_options", 00:17:09.327 "params": { 00:17:09.327 "allow_duplicated_isid": false, 00:17:09.327 "chap_group": 0, 00:17:09.327 "data_out_pool_size": 2048, 00:17:09.327 "default_time2retain": 20, 00:17:09.327 "default_time2wait": 2, 00:17:09.327 "disable_chap": false, 00:17:09.327 "error_recovery_level": 0, 00:17:09.327 "first_burst_length": 8192, 00:17:09.327 "immediate_data": true, 00:17:09.327 "immediate_data_pool_size": 16384, 00:17:09.327 "max_connections_per_session": 2, 00:17:09.327 "max_large_datain_per_connection": 64, 00:17:09.327 "max_queue_depth": 64, 00:17:09.327 "max_r2t_per_connection": 4, 00:17:09.327 "max_sessions": 128, 00:17:09.327 "mutual_chap": false, 00:17:09.327 "node_base": "iqn.2016-06.io.spdk", 00:17:09.327 "nop_in_interval": 30, 00:17:09.327 "nop_timeout": 60, 00:17:09.327 "pdu_pool_size": 36864, 00:17:09.327 "require_chap": false 00:17:09.327 } 00:17:09.327 } 00:17:09.327 ] 00:17:09.327 } 00:17:09.327 ] 00:17:09.327 } 00:17:09.327 15:35:39 -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:17:09.327 15:35:39 -- rpc/skip_rpc.sh@40 -- # killprocess 60452 00:17:09.327 15:35:39 -- common/autotest_common.sh@936 -- # '[' -z 60452 ']' 00:17:09.327 15:35:39 -- common/autotest_common.sh@940 -- # kill -0 60452 00:17:09.327 15:35:39 -- common/autotest_common.sh@941 -- # uname 00:17:09.327 15:35:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:09.327 15:35:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60452 00:17:09.327 killing process with pid 60452 00:17:09.327 15:35:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:09.327 15:35:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:09.327 15:35:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60452' 00:17:09.327 15:35:39 -- common/autotest_common.sh@955 -- # kill 60452 00:17:09.327 15:35:39 -- common/autotest_common.sh@960 -- # wait 60452 00:17:09.892 15:35:39 -- rpc/skip_rpc.sh@47 -- # local spdk_pid=60486 00:17:09.892 15:35:39 -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:09.892 15:35:39 -- rpc/skip_rpc.sh@48 -- # sleep 5 00:17:15.165 15:35:44 -- rpc/skip_rpc.sh@50 -- # killprocess 60486 00:17:15.165 15:35:44 -- common/autotest_common.sh@936 -- # '[' -z 60486 ']' 00:17:15.165 15:35:44 -- common/autotest_common.sh@940 -- # kill -0 60486 00:17:15.165 15:35:44 -- common/autotest_common.sh@941 -- # uname 00:17:15.165 15:35:44 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:15.165 15:35:44 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60486 00:17:15.165 killing process with pid 60486 00:17:15.165 15:35:45 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:15.165 15:35:45 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:15.165 15:35:45 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60486' 00:17:15.165 15:35:45 -- common/autotest_common.sh@955 -- # kill 60486 00:17:15.165 15:35:45 -- common/autotest_common.sh@960 -- # wait 60486 00:17:15.165 15:35:45 -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:15.165 15:35:45 -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:17:15.165 00:17:15.165 real 0m7.213s 00:17:15.165 user 0m6.962s 00:17:15.165 sys 0m0.689s 00:17:15.165 
15:35:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:15.165 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.165 ************************************ 00:17:15.165 END TEST skip_rpc_with_json 00:17:15.165 ************************************ 00:17:15.433 15:35:45 -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:17:15.433 15:35:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:15.433 15:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:15.433 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.433 ************************************ 00:17:15.433 START TEST skip_rpc_with_delay 00:17:15.433 ************************************ 00:17:15.433 15:35:45 -- common/autotest_common.sh@1111 -- # test_skip_rpc_with_delay 00:17:15.433 15:35:45 -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:15.433 15:35:45 -- common/autotest_common.sh@638 -- # local es=0 00:17:15.433 15:35:45 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:15.433 15:35:45 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.433 15:35:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:15.433 15:35:45 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.433 15:35:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:15.433 15:35:45 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.433 15:35:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:15.433 15:35:45 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:15.433 15:35:45 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:15.433 15:35:45 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:17:15.433 [2024-04-26 15:35:45.627964] app.c: 751:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:17:15.433 [2024-04-26 15:35:45.628185] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:17:15.433 15:35:45 -- common/autotest_common.sh@641 -- # es=1 00:17:15.433 15:35:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:15.433 15:35:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:17:15.433 15:35:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:15.433 00:17:15.433 real 0m0.109s 00:17:15.433 user 0m0.067s 00:17:15.433 sys 0m0.040s 00:17:15.433 15:35:45 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:15.433 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.433 ************************************ 00:17:15.433 END TEST skip_rpc_with_delay 00:17:15.433 ************************************ 00:17:15.433 15:35:45 -- rpc/skip_rpc.sh@77 -- # uname 00:17:15.433 15:35:45 -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:17:15.433 15:35:45 -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:17:15.433 15:35:45 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:15.433 15:35:45 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:15.433 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.732 ************************************ 00:17:15.732 START TEST exit_on_failed_rpc_init 00:17:15.732 ************************************ 00:17:15.732 15:35:45 -- common/autotest_common.sh@1111 -- # test_exit_on_failed_rpc_init 00:17:15.732 15:35:45 -- rpc/skip_rpc.sh@62 -- # local spdk_pid=60610 00:17:15.732 15:35:45 -- rpc/skip_rpc.sh@63 -- # waitforlisten 60610 00:17:15.732 15:35:45 -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:17:15.732 15:35:45 -- common/autotest_common.sh@817 -- # '[' -z 60610 ']' 00:17:15.732 15:35:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:15.732 15:35:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:15.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:15.732 15:35:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:15.732 15:35:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:15.732 15:35:45 -- common/autotest_common.sh@10 -- # set +x 00:17:15.732 [2024-04-26 15:35:45.841290] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:15.732 [2024-04-26 15:35:45.841387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60610 ] 00:17:15.732 [2024-04-26 15:35:45.975917] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:15.990 [2024-04-26 15:35:46.092546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:16.554 15:35:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:16.554 15:35:46 -- common/autotest_common.sh@850 -- # return 0 00:17:16.554 15:35:46 -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:17:16.554 15:35:46 -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:16.812 15:35:46 -- common/autotest_common.sh@638 -- # local es=0 00:17:16.812 15:35:46 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:16.812 15:35:46 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.812 15:35:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:16.812 15:35:46 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.812 15:35:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:16.812 15:35:46 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.812 15:35:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:17:16.812 15:35:46 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:16.812 15:35:46 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:17:16.812 15:35:46 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:17:16.812 [2024-04-26 15:35:46.917746] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:17:16.812 [2024-04-26 15:35:46.917856] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60640 ] 00:17:16.812 [2024-04-26 15:35:47.057528] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:17.070 [2024-04-26 15:35:47.180425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.070 [2024-04-26 15:35:47.180524] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
00:17:17.070 [2024-04-26 15:35:47.180539] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:17:17.070 [2024-04-26 15:35:47.180548] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:17:17.070 15:35:47 -- common/autotest_common.sh@641 -- # es=234 00:17:17.070 15:35:47 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:17:17.070 15:35:47 -- common/autotest_common.sh@650 -- # es=106 00:17:17.070 15:35:47 -- common/autotest_common.sh@651 -- # case "$es" in 00:17:17.070 15:35:47 -- common/autotest_common.sh@658 -- # es=1 00:17:17.070 15:35:47 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:17:17.070 15:35:47 -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:17:17.070 15:35:47 -- rpc/skip_rpc.sh@70 -- # killprocess 60610 00:17:17.070 15:35:47 -- common/autotest_common.sh@936 -- # '[' -z 60610 ']' 00:17:17.070 15:35:47 -- common/autotest_common.sh@940 -- # kill -0 60610 00:17:17.070 15:35:47 -- common/autotest_common.sh@941 -- # uname 00:17:17.070 15:35:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:17.070 15:35:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 60610 00:17:17.070 15:35:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:17.070 killing process with pid 60610 00:17:17.070 15:35:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:17.070 15:35:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 60610' 00:17:17.070 15:35:47 -- common/autotest_common.sh@955 -- # kill 60610 00:17:17.070 15:35:47 -- common/autotest_common.sh@960 -- # wait 60610 00:17:17.671 00:17:17.671 real 0m1.963s 00:17:17.671 user 0m2.361s 00:17:17.671 sys 0m0.414s 00:17:17.671 15:35:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.671 15:35:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.671 ************************************ 00:17:17.671 END TEST exit_on_failed_rpc_init 00:17:17.671 ************************************ 00:17:17.671 15:35:47 -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:17:17.671 00:17:17.671 real 0m15.353s 00:17:17.671 user 0m14.674s 00:17:17.671 sys 0m1.762s 00:17:17.671 15:35:47 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.671 ************************************ 00:17:17.671 15:35:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.671 END TEST skip_rpc 00:17:17.671 ************************************ 00:17:17.671 15:35:47 -- spdk/autotest.sh@167 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:17.671 15:35:47 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:17.671 15:35:47 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.671 15:35:47 -- common/autotest_common.sh@10 -- # set +x 00:17:17.671 ************************************ 00:17:17.671 START TEST rpc_client 00:17:17.671 ************************************ 00:17:17.671 15:35:47 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:17:17.930 * Looking for test storage... 
00:17:17.930 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:17:17.930 15:35:47 -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:17:17.930 OK 00:17:17.930 15:35:48 -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:17:17.930 00:17:17.930 real 0m0.106s 00:17:17.930 user 0m0.051s 00:17:17.930 sys 0m0.057s 00:17:17.930 15:35:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:17.930 15:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 ************************************ 00:17:17.930 END TEST rpc_client 00:17:17.930 ************************************ 00:17:17.930 15:35:48 -- spdk/autotest.sh@168 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:17.930 15:35:48 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:17.930 15:35:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:17.930 15:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 ************************************ 00:17:17.930 START TEST json_config 00:17:17.930 ************************************ 00:17:17.930 15:35:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:17:17.930 15:35:48 -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:17.930 15:35:48 -- nvmf/common.sh@7 -- # uname -s 00:17:17.930 15:35:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:17.930 15:35:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:17.930 15:35:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:17.930 15:35:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:17.930 15:35:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:17.930 15:35:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:17.930 15:35:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:17.930 15:35:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:17.930 15:35:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:17.930 15:35:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:17.930 15:35:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:17:17.930 15:35:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:17:17.930 15:35:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:17.930 15:35:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:17.930 15:35:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:17.930 15:35:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:17.930 15:35:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:17.930 15:35:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:17.930 15:35:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:17.930 15:35:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:17.930 15:35:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.930 15:35:48 -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.930 15:35:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.930 15:35:48 -- paths/export.sh@5 -- # export PATH 00:17:17.930 15:35:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:17.930 15:35:48 -- nvmf/common.sh@47 -- # : 0 00:17:17.930 15:35:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:17.930 15:35:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:17.930 15:35:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:17.930 15:35:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:17.930 15:35:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:17.930 15:35:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:17.930 15:35:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:17.930 15:35:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:17.930 15:35:48 -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:17.930 15:35:48 -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:17:17.930 15:35:48 -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:17:17.930 15:35:48 -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:17:17.930 15:35:48 -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:17:17.930 15:35:48 -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:17:17.930 15:35:48 -- json_config/json_config.sh@31 -- # declare -A app_pid 00:17:17.930 15:35:48 -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:17:17.930 15:35:48 -- json_config/json_config.sh@32 -- # declare -A app_socket 00:17:17.930 15:35:48 -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:17:17.930 15:35:48 -- json_config/json_config.sh@33 -- # declare -A app_params 00:17:17.930 15:35:48 -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:17:17.930 15:35:48 -- json_config/json_config.sh@34 -- # declare -A configs_path 00:17:17.930 15:35:48 -- json_config/json_config.sh@40 -- # last_event_id=0 00:17:17.930 
15:35:48 -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:17.930 INFO: JSON configuration test init 00:17:17.930 15:35:48 -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:17:17.930 15:35:48 -- json_config/json_config.sh@357 -- # json_config_test_init 00:17:17.930 15:35:48 -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:17:17.930 15:35:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:17.930 15:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 15:35:48 -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:17:17.930 15:35:48 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:17.930 15:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:17.930 15:35:48 -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:17:17.930 15:35:48 -- json_config/common.sh@9 -- # local app=target 00:17:17.930 15:35:48 -- json_config/common.sh@10 -- # shift 00:17:17.930 15:35:48 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:17.930 15:35:48 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:17.930 15:35:48 -- json_config/common.sh@15 -- # local app_extra_params= 00:17:17.930 15:35:48 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:17.930 15:35:48 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:17.930 15:35:48 -- json_config/common.sh@22 -- # app_pid["$app"]=60768 00:17:17.930 Waiting for target to run... 00:17:17.930 15:35:48 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:17.930 15:35:48 -- json_config/common.sh@25 -- # waitforlisten 60768 /var/tmp/spdk_tgt.sock 00:17:17.930 15:35:48 -- common/autotest_common.sh@817 -- # '[' -z 60768 ']' 00:17:17.930 15:35:48 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:17:17.930 15:35:48 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:17.930 15:35:48 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:17.930 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:17.930 15:35:48 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:17.930 15:35:48 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:17.930 15:35:48 -- common/autotest_common.sh@10 -- # set +x 00:17:18.188 [2024-04-26 15:35:48.284081] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:18.188 [2024-04-26 15:35:48.284196] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60768 ] 00:17:18.446 [2024-04-26 15:35:48.724184] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:18.704 [2024-04-26 15:35:48.828245] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.271 15:35:49 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:19.271 15:35:49 -- common/autotest_common.sh@850 -- # return 0 00:17:19.271 00:17:19.271 15:35:49 -- json_config/common.sh@26 -- # echo '' 00:17:19.271 15:35:49 -- json_config/json_config.sh@269 -- # create_accel_config 00:17:19.271 15:35:49 -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:17:19.271 15:35:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:19.271 15:35:49 -- common/autotest_common.sh@10 -- # set +x 00:17:19.271 15:35:49 -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:17:19.271 15:35:49 -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:17:19.271 15:35:49 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:19.271 15:35:49 -- common/autotest_common.sh@10 -- # set +x 00:17:19.271 15:35:49 -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:17:19.271 15:35:49 -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:17:19.271 15:35:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:17:19.528 15:35:49 -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:17:19.528 15:35:49 -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:17:19.528 15:35:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:19.528 15:35:49 -- common/autotest_common.sh@10 -- # set +x 00:17:19.528 15:35:49 -- json_config/json_config.sh@45 -- # local ret=0 00:17:19.528 15:35:49 -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:17:19.528 15:35:49 -- json_config/json_config.sh@46 -- # local enabled_types 00:17:19.787 15:35:49 -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:17:19.787 15:35:49 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:17:19.787 15:35:49 -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:17:20.045 15:35:50 -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:17:20.045 15:35:50 -- json_config/json_config.sh@48 -- # local get_types 00:17:20.045 15:35:50 -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:17:20.045 15:35:50 -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:17:20.045 15:35:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:20.045 15:35:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.045 15:35:50 -- json_config/json_config.sh@55 -- # return 0 00:17:20.045 15:35:50 -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:17:20.045 15:35:50 -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:17:20.045 15:35:50 -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:17:20.045 15:35:50 -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 
00:17:20.045 15:35:50 -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:17:20.045 15:35:50 -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:17:20.045 15:35:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:20.045 15:35:50 -- common/autotest_common.sh@10 -- # set +x 00:17:20.045 15:35:50 -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:17:20.045 15:35:50 -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:17:20.045 15:35:50 -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:17:20.045 15:35:50 -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:17:20.045 15:35:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:17:20.303 MallocForNvmf0 00:17:20.303 15:35:50 -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:17:20.303 15:35:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:17:20.561 MallocForNvmf1 00:17:20.561 15:35:50 -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:17:20.561 15:35:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:17:20.818 [2024-04-26 15:35:50.895448] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:20.819 15:35:50 -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:20.819 15:35:50 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:21.076 15:35:51 -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:17:21.076 15:35:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:17:21.334 15:35:51 -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:17:21.334 15:35:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:17:21.592 15:35:51 -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:17:21.592 15:35:51 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:17:21.850 [2024-04-26 15:35:52.140065] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:17:22.110 15:35:52 -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:17:22.110 15:35:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.110 15:35:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.110 15:35:52 -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:17:22.110 15:35:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.110 15:35:52 -- 
common/autotest_common.sh@10 -- # set +x 00:17:22.110 15:35:52 -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:17:22.110 15:35:52 -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:17:22.110 15:35:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:17:22.368 MallocBdevForConfigChangeCheck 00:17:22.369 15:35:52 -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:17:22.369 15:35:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:22.369 15:35:52 -- common/autotest_common.sh@10 -- # set +x 00:17:22.369 15:35:52 -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:17:22.369 15:35:52 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:22.627 INFO: shutting down applications... 00:17:22.627 15:35:52 -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:17:22.627 15:35:52 -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:17:22.627 15:35:52 -- json_config/json_config.sh@368 -- # json_config_clear target 00:17:22.627 15:35:52 -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:17:22.627 15:35:52 -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:17:23.192 Calling clear_iscsi_subsystem 00:17:23.192 Calling clear_nvmf_subsystem 00:17:23.192 Calling clear_nbd_subsystem 00:17:23.192 Calling clear_ublk_subsystem 00:17:23.192 Calling clear_vhost_blk_subsystem 00:17:23.192 Calling clear_vhost_scsi_subsystem 00:17:23.192 Calling clear_bdev_subsystem 00:17:23.192 15:35:53 -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:17:23.192 15:35:53 -- json_config/json_config.sh@343 -- # count=100 00:17:23.192 15:35:53 -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:17:23.192 15:35:53 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:23.192 15:35:53 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:17:23.192 15:35:53 -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:17:23.449 15:35:53 -- json_config/json_config.sh@345 -- # break 00:17:23.449 15:35:53 -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:17:23.449 15:35:53 -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:17:23.449 15:35:53 -- json_config/common.sh@31 -- # local app=target 00:17:23.449 15:35:53 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:23.449 15:35:53 -- json_config/common.sh@35 -- # [[ -n 60768 ]] 00:17:23.449 15:35:53 -- json_config/common.sh@38 -- # kill -SIGINT 60768 00:17:23.449 15:35:53 -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:23.449 15:35:53 -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:23.449 15:35:53 -- json_config/common.sh@41 -- # kill -0 60768 00:17:23.449 15:35:53 -- json_config/common.sh@45 -- # sleep 0.5 00:17:24.014 15:35:54 -- json_config/common.sh@40 -- # (( i++ )) 00:17:24.014 15:35:54 -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:24.014 15:35:54 -- json_config/common.sh@41 -- # kill -0 60768 00:17:24.014 15:35:54 -- 
json_config/common.sh@42 -- # app_pid["$app"]= 00:17:24.014 15:35:54 -- json_config/common.sh@43 -- # break 00:17:24.014 15:35:54 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:24.014 SPDK target shutdown done 00:17:24.014 15:35:54 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:24.014 INFO: relaunching applications... 00:17:24.014 15:35:54 -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:17:24.014 15:35:54 -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:24.014 15:35:54 -- json_config/common.sh@9 -- # local app=target 00:17:24.014 15:35:54 -- json_config/common.sh@10 -- # shift 00:17:24.014 15:35:54 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:24.014 15:35:54 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:24.014 15:35:54 -- json_config/common.sh@15 -- # local app_extra_params= 00:17:24.014 15:35:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:24.014 15:35:54 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:24.014 15:35:54 -- json_config/common.sh@22 -- # app_pid["$app"]=61048 00:17:24.014 Waiting for target to run... 00:17:24.015 15:35:54 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:24.015 15:35:54 -- json_config/common.sh@25 -- # waitforlisten 61048 /var/tmp/spdk_tgt.sock 00:17:24.015 15:35:54 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:24.015 15:35:54 -- common/autotest_common.sh@817 -- # '[' -z 61048 ']' 00:17:24.015 15:35:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:24.015 15:35:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:24.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:24.015 15:35:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:24.015 15:35:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:24.015 15:35:54 -- common/autotest_common.sh@10 -- # set +x 00:17:24.015 [2024-04-26 15:35:54.194052] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:17:24.015 [2024-04-26 15:35:54.194198] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61048 ] 00:17:24.580 [2024-04-26 15:35:54.617229] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:24.580 [2024-04-26 15:35:54.722259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:24.838 [2024-04-26 15:35:55.040809] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:24.838 [2024-04-26 15:35:55.072907] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:17:25.097 15:35:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:25.097 00:17:25.097 15:35:55 -- common/autotest_common.sh@850 -- # return 0 00:17:25.097 15:35:55 -- json_config/common.sh@26 -- # echo '' 00:17:25.097 15:35:55 -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:17:25.097 INFO: Checking if target configuration is the same... 
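The target relaunched above rebuilds its state from spdk_tgt_config.json. For reference, the NVMe-oF/TCP topology that create_nvmf_subsystem_config assembled earlier reduces to the following rpc.py sequence; this is a condensed sketch using the names, sizes and listen address from the traces above, not a script shipped with the test:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512  --name MallocForNvmf0     # 8 MB malloc bdev, 512-byte blocks
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1     # 4 MB malloc bdev, 1024-byte blocks
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0           # TCP transport, options as traced
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420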
00:17:25.097 15:35:55 -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:17:25.097 15:35:55 -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:25.097 15:35:55 -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:17:25.097 15:35:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:25.097 + '[' 2 -ne 2 ']' 00:17:25.097 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:17:25.097 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 00:17:25.097 + rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.097 +++ basename /dev/fd/62 00:17:25.097 ++ mktemp /tmp/62.XXX 00:17:25.097 + tmp_file_1=/tmp/62.QNN 00:17:25.097 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:25.097 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:17:25.097 + tmp_file_2=/tmp/spdk_tgt_config.json.XiO 00:17:25.097 + ret=0 00:17:25.097 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:25.355 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:25.355 + diff -u /tmp/62.QNN /tmp/spdk_tgt_config.json.XiO 00:17:25.613 INFO: JSON config files are the same 00:17:25.613 + echo 'INFO: JSON config files are the same' 00:17:25.613 + rm /tmp/62.QNN /tmp/spdk_tgt_config.json.XiO 00:17:25.613 + exit 0 00:17:25.613 15:35:55 -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:17:25.613 INFO: changing configuration and checking if this can be detected... 00:17:25.613 15:35:55 -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:17:25.613 15:35:55 -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:17:25.613 15:35:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:17:25.870 15:35:55 -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /dev/fd/62 /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:25.870 15:35:55 -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:17:25.870 15:35:55 -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:17:25.870 + '[' 2 -ne 2 ']' 00:17:25.870 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:17:25.870 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
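json_diff.sh, traced above for the "unchanged" pass and again below for the change-detection pass, compares the target's live configuration against the JSON file it was started from: both are normalized with config_filter.py -method sort and then diffed. A minimal sketch of that check, with the temporary file names chosen here purely for illustration:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Normalize the running config and the on-disk config, then compare.
    $SPDK/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | $SPDK/test/json_config/config_filter.py -method sort > /tmp/running.json
    $SPDK/test/json_config/config_filter.py -method sort \
        < $SPDK/spdk_tgt_config.json > /tmp/ondisk.json
    diff -u /tmp/ondisk.json /tmp/running.json && echo 'INFO: JSON config files are the same'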
00:17:25.870 + rootdir=/home/vagrant/spdk_repo/spdk 00:17:25.870 +++ basename /dev/fd/62 00:17:25.870 ++ mktemp /tmp/62.XXX 00:17:25.870 + tmp_file_1=/tmp/62.WP3 00:17:25.870 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:25.871 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:17:25.871 + tmp_file_2=/tmp/spdk_tgt_config.json.juR 00:17:25.871 + ret=0 00:17:25.871 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:26.128 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:17:26.128 + diff -u /tmp/62.WP3 /tmp/spdk_tgt_config.json.juR 00:17:26.128 + ret=1 00:17:26.128 + echo '=== Start of file: /tmp/62.WP3 ===' 00:17:26.128 + cat /tmp/62.WP3 00:17:26.128 + echo '=== End of file: /tmp/62.WP3 ===' 00:17:26.128 + echo '' 00:17:26.128 + echo '=== Start of file: /tmp/spdk_tgt_config.json.juR ===' 00:17:26.128 + cat /tmp/spdk_tgt_config.json.juR 00:17:26.128 + echo '=== End of file: /tmp/spdk_tgt_config.json.juR ===' 00:17:26.128 + echo '' 00:17:26.128 + rm /tmp/62.WP3 /tmp/spdk_tgt_config.json.juR 00:17:26.128 + exit 1 00:17:26.128 INFO: configuration change detected. 00:17:26.128 15:35:56 -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:17:26.128 15:35:56 -- json_config/json_config.sh@394 -- # json_config_test_fini 00:17:26.128 15:35:56 -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:17:26.128 15:35:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.128 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.128 15:35:56 -- json_config/json_config.sh@307 -- # local ret=0 00:17:26.128 15:35:56 -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:17:26.128 15:35:56 -- json_config/json_config.sh@317 -- # [[ -n 61048 ]] 00:17:26.128 15:35:56 -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:17:26.128 15:35:56 -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:17:26.128 15:35:56 -- common/autotest_common.sh@710 -- # xtrace_disable 00:17:26.128 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.128 15:35:56 -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:17:26.128 15:35:56 -- json_config/json_config.sh@193 -- # uname -s 00:17:26.128 15:35:56 -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:17:26.128 15:35:56 -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:17:26.128 15:35:56 -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:17:26.128 15:35:56 -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:17:26.128 15:35:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.128 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.385 15:35:56 -- json_config/json_config.sh@323 -- # killprocess 61048 00:17:26.385 15:35:56 -- common/autotest_common.sh@936 -- # '[' -z 61048 ']' 00:17:26.385 15:35:56 -- common/autotest_common.sh@940 -- # kill -0 61048 00:17:26.385 15:35:56 -- common/autotest_common.sh@941 -- # uname 00:17:26.385 15:35:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:26.385 15:35:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61048 00:17:26.385 15:35:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:26.385 killing process with pid 61048 00:17:26.385 15:35:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:26.385 15:35:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61048' 00:17:26.385 
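The shutdown helpers traced in this run (json_config_test_shutdown_app earlier for pid 60768, killprocess here for pid 61048) follow the same basic pattern: signal the target, then poll its PID until it exits. A condensed sketch of the json_config variant, with the signal, retry count and sleep interval taken from the traces:

    app_pid=61048                                 # the spdk_tgt PID recorded at launch in this run
    kill -SIGINT "$app_pid"
    for ((i = 0; i < 30; i++)); do
        kill -0 "$app_pid" 2>/dev/null || break   # kill -0 only tests whether the PID is still alive
        sleep 0.5
    done
    echo 'SPDK target shutdown done'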
15:35:56 -- common/autotest_common.sh@955 -- # kill 61048 00:17:26.385 15:35:56 -- common/autotest_common.sh@960 -- # wait 61048 00:17:26.644 15:35:56 -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:17:26.644 15:35:56 -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:17:26.644 15:35:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:17:26.644 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.644 15:35:56 -- json_config/json_config.sh@328 -- # return 0 00:17:26.644 15:35:56 -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:17:26.644 INFO: Success 00:17:26.644 00:17:26.644 real 0m8.657s 00:17:26.644 user 0m12.504s 00:17:26.644 sys 0m1.879s 00:17:26.644 15:35:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:26.644 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.644 ************************************ 00:17:26.644 END TEST json_config 00:17:26.644 ************************************ 00:17:26.644 15:35:56 -- spdk/autotest.sh@169 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:26.644 15:35:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:26.644 15:35:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:26.644 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.644 ************************************ 00:17:26.644 START TEST json_config_extra_key 00:17:26.644 ************************************ 00:17:26.644 15:35:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:17:26.644 15:35:56 -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:17:26.644 15:35:56 -- nvmf/common.sh@7 -- # uname -s 00:17:26.644 15:35:56 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.644 15:35:56 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.644 15:35:56 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.644 15:35:56 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.644 15:35:56 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:26.644 15:35:56 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:26.644 15:35:56 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.644 15:35:56 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:26.644 15:35:56 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.902 15:35:56 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:26.902 15:35:56 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:17:26.902 15:35:56 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:17:26.902 15:35:56 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.902 15:35:56 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:26.902 15:35:56 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:17:26.902 15:35:56 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.902 15:35:56 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:17:26.902 15:35:56 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.902 15:35:56 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.902 15:35:56 -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.902 15:35:56 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.903 15:35:56 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.903 15:35:56 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.903 15:35:56 -- paths/export.sh@5 -- # export PATH 00:17:26.903 15:35:56 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.903 15:35:56 -- nvmf/common.sh@47 -- # : 0 00:17:26.903 15:35:56 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:26.903 15:35:56 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:26.903 15:35:56 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:26.903 15:35:56 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.903 15:35:56 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.903 15:35:56 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:26.903 15:35:56 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:26.903 15:35:56 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@22 -- # trap 
'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:17:26.903 INFO: launching applications... 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:17:26.903 15:35:56 -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:26.903 15:35:56 -- json_config/common.sh@9 -- # local app=target 00:17:26.903 15:35:56 -- json_config/common.sh@10 -- # shift 00:17:26.903 15:35:56 -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:17:26.903 15:35:56 -- json_config/common.sh@13 -- # [[ -z '' ]] 00:17:26.903 15:35:56 -- json_config/common.sh@15 -- # local app_extra_params= 00:17:26.903 15:35:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:26.903 15:35:56 -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:17:26.903 15:35:56 -- json_config/common.sh@22 -- # app_pid["$app"]=61229 00:17:26.903 15:35:56 -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:17:26.903 Waiting for target to run... 00:17:26.903 15:35:56 -- json_config/common.sh@25 -- # waitforlisten 61229 /var/tmp/spdk_tgt.sock 00:17:26.903 15:35:56 -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:17:26.903 15:35:56 -- common/autotest_common.sh@817 -- # '[' -z 61229 ']' 00:17:26.903 15:35:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:17:26.903 15:35:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:26.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:17:26.903 15:35:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:17:26.903 15:35:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:26.903 15:35:56 -- common/autotest_common.sh@10 -- # set +x 00:17:26.903 [2024-04-26 15:35:57.007257] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:17:26.903 [2024-04-26 15:35:57.007366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61229 ] 00:17:27.161 [2024-04-26 15:35:57.411463] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.419 [2024-04-26 15:35:57.505511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.677 15:35:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:27.677 15:35:57 -- common/autotest_common.sh@850 -- # return 0 00:17:27.935 15:35:57 -- json_config/common.sh@26 -- # echo '' 00:17:27.935 00:17:27.935 INFO: shutting down applications... 00:17:27.935 15:35:57 -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
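json_config_test_start_app, traced above, launches spdk_tgt with the extra_key.json configuration and blocks in waitforlisten until the RPC socket answers. A simplified stand-in for that launch-and-wait step; the rpc_get_methods probe is just one convenient RPC to poll with, not what waitforlisten literally calls:

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 \
        -r /var/tmp/spdk_tgt.sock \
        --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json &
    tgt_pid=$!
    # Poll the UNIX-domain RPC socket until the target is ready to serve requests.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock rpc_get_methods \
          >/dev/null 2>&1; do
        sleep 0.5
    done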
00:17:27.935 15:35:57 -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:17:27.935 15:35:57 -- json_config/common.sh@31 -- # local app=target 00:17:27.935 15:35:57 -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:17:27.935 15:35:57 -- json_config/common.sh@35 -- # [[ -n 61229 ]] 00:17:27.935 15:35:57 -- json_config/common.sh@38 -- # kill -SIGINT 61229 00:17:27.935 15:35:57 -- json_config/common.sh@40 -- # (( i = 0 )) 00:17:27.935 15:35:57 -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:27.935 15:35:57 -- json_config/common.sh@41 -- # kill -0 61229 00:17:27.935 15:35:57 -- json_config/common.sh@45 -- # sleep 0.5 00:17:28.193 15:35:58 -- json_config/common.sh@40 -- # (( i++ )) 00:17:28.193 15:35:58 -- json_config/common.sh@40 -- # (( i < 30 )) 00:17:28.193 15:35:58 -- json_config/common.sh@41 -- # kill -0 61229 00:17:28.193 15:35:58 -- json_config/common.sh@42 -- # app_pid["$app"]= 00:17:28.193 15:35:58 -- json_config/common.sh@43 -- # break 00:17:28.194 15:35:58 -- json_config/common.sh@48 -- # [[ -n '' ]] 00:17:28.194 SPDK target shutdown done 00:17:28.194 15:35:58 -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:17:28.194 Success 00:17:28.194 15:35:58 -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:17:28.194 00:17:28.194 real 0m1.609s 00:17:28.194 user 0m1.541s 00:17:28.194 sys 0m0.421s 00:17:28.194 15:35:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:28.194 15:35:58 -- common/autotest_common.sh@10 -- # set +x 00:17:28.194 ************************************ 00:17:28.194 END TEST json_config_extra_key 00:17:28.194 ************************************ 00:17:28.455 15:35:58 -- spdk/autotest.sh@170 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:28.455 15:35:58 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:28.455 15:35:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:28.455 15:35:58 -- common/autotest_common.sh@10 -- # set +x 00:17:28.455 ************************************ 00:17:28.455 START TEST alias_rpc 00:17:28.455 ************************************ 00:17:28.455 15:35:58 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:17:28.455 * Looking for test storage... 00:17:28.455 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:17:28.455 15:35:58 -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:17:28.455 15:35:58 -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=61311 00:17:28.455 15:35:58 -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:28.455 15:35:58 -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 61311 00:17:28.455 15:35:58 -- common/autotest_common.sh@817 -- # '[' -z 61311 ']' 00:17:28.455 15:35:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:28.455 15:35:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:28.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:28.455 15:35:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:28.455 15:35:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:28.455 15:35:58 -- common/autotest_common.sh@10 -- # set +x 00:17:28.455 [2024-04-26 15:35:58.740666] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:28.455 [2024-04-26 15:35:58.741279] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61311 ] 00:17:28.713 [2024-04-26 15:35:58.879543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:28.713 [2024-04-26 15:35:58.998703] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:29.647 15:35:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:29.647 15:35:59 -- common/autotest_common.sh@850 -- # return 0 00:17:29.647 15:35:59 -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:17:29.914 15:36:00 -- alias_rpc/alias_rpc.sh@19 -- # killprocess 61311 00:17:29.914 15:36:00 -- common/autotest_common.sh@936 -- # '[' -z 61311 ']' 00:17:29.914 15:36:00 -- common/autotest_common.sh@940 -- # kill -0 61311 00:17:29.914 15:36:00 -- common/autotest_common.sh@941 -- # uname 00:17:29.914 15:36:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:29.914 15:36:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61311 00:17:29.914 killing process with pid 61311 00:17:29.914 15:36:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:29.914 15:36:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:29.914 15:36:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61311' 00:17:29.914 15:36:00 -- common/autotest_common.sh@955 -- # kill 61311 00:17:29.914 15:36:00 -- common/autotest_common.sh@960 -- # wait 61311 00:17:30.480 ************************************ 00:17:30.480 END TEST alias_rpc 00:17:30.480 ************************************ 00:17:30.480 00:17:30.480 real 0m1.931s 00:17:30.480 user 0m2.251s 00:17:30.480 sys 0m0.457s 00:17:30.480 15:36:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:30.480 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.480 15:36:00 -- spdk/autotest.sh@172 -- # [[ 1 -eq 0 ]] 00:17:30.480 15:36:00 -- spdk/autotest.sh@176 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:30.480 15:36:00 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:30.480 15:36:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:30.480 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.480 ************************************ 00:17:30.480 START TEST dpdk_mem_utility 00:17:30.480 ************************************ 00:17:30.480 15:36:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:17:30.480 * Looking for test storage... 00:17:30.480 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:17:30.480 15:36:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:30.480 15:36:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=61408 00:17:30.480 15:36:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:30.480 15:36:00 -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 61408 00:17:30.480 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
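The dpdk_mem_utility test starting here exercises SPDK's DPDK memory introspection: the running target is asked to dump its DPDK memory state to a file, and dpdk_mem_info.py then parses that dump. A short sketch of the flow (the RPC, the dump path and the script invocations all appear in the traces below):

    SPDK=/home/vagrant/spdk_repo/spdk
    $SPDK/scripts/rpc.py env_dpdk_get_mem_stats    # target writes /tmp/spdk_mem_dump.txt
    $SPDK/scripts/dpdk_mem_info.py                 # summarize heaps, mempools and memzones
    $SPDK/scripts/dpdk_mem_info.py -m 0            # detailed element listing for heap id 0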
00:17:30.480 15:36:00 -- common/autotest_common.sh@817 -- # '[' -z 61408 ']' 00:17:30.480 15:36:00 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.480 15:36:00 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:30.480 15:36:00 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.480 15:36:00 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:30.480 15:36:00 -- common/autotest_common.sh@10 -- # set +x 00:17:30.480 [2024-04-26 15:36:00.758904] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:17:30.480 [2024-04-26 15:36:00.759002] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61408 ] 00:17:30.738 [2024-04-26 15:36:00.893116] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.738 [2024-04-26 15:36:01.012610] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.683 15:36:01 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:31.683 15:36:01 -- common/autotest_common.sh@850 -- # return 0 00:17:31.683 15:36:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:17:31.684 15:36:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:17:31.684 15:36:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:31.684 15:36:01 -- common/autotest_common.sh@10 -- # set +x 00:17:31.684 { 00:17:31.684 "filename": "/tmp/spdk_mem_dump.txt" 00:17:31.684 } 00:17:31.684 15:36:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:31.684 15:36:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:17:31.684 DPDK memory size 814.000000 MiB in 1 heap(s) 00:17:31.684 1 heaps totaling size 814.000000 MiB 00:17:31.684 size: 814.000000 MiB heap id: 0 00:17:31.684 end heaps---------- 00:17:31.684 8 mempools totaling size 598.116089 MiB 00:17:31.684 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:17:31.684 size: 158.602051 MiB name: PDU_data_out_Pool 00:17:31.684 size: 84.521057 MiB name: bdev_io_61408 00:17:31.684 size: 51.011292 MiB name: evtpool_61408 00:17:31.684 size: 50.003479 MiB name: msgpool_61408 00:17:31.684 size: 21.763794 MiB name: PDU_Pool 00:17:31.684 size: 19.513306 MiB name: SCSI_TASK_Pool 00:17:31.684 size: 0.026123 MiB name: Session_Pool 00:17:31.684 end mempools------- 00:17:31.684 6 memzones totaling size 4.142822 MiB 00:17:31.684 size: 1.000366 MiB name: RG_ring_0_61408 00:17:31.684 size: 1.000366 MiB name: RG_ring_1_61408 00:17:31.684 size: 1.000366 MiB name: RG_ring_4_61408 00:17:31.684 size: 1.000366 MiB name: RG_ring_5_61408 00:17:31.684 size: 0.125366 MiB name: RG_ring_2_61408 00:17:31.684 size: 0.015991 MiB name: RG_ring_3_61408 00:17:31.684 end memzones------- 00:17:31.684 15:36:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:17:31.684 heap id: 0 total size: 814.000000 MiB number of busy elements: 215 number of free elements: 15 00:17:31.684 list of free elements. 
size: 12.487488 MiB 00:17:31.684 element at address: 0x200000400000 with size: 1.999512 MiB 00:17:31.684 element at address: 0x200018e00000 with size: 0.999878 MiB 00:17:31.684 element at address: 0x200019000000 with size: 0.999878 MiB 00:17:31.684 element at address: 0x200003e00000 with size: 0.996277 MiB 00:17:31.684 element at address: 0x200031c00000 with size: 0.994446 MiB 00:17:31.684 element at address: 0x200013800000 with size: 0.978699 MiB 00:17:31.684 element at address: 0x200007000000 with size: 0.959839 MiB 00:17:31.684 element at address: 0x200019200000 with size: 0.936584 MiB 00:17:31.684 element at address: 0x200000200000 with size: 0.837036 MiB 00:17:31.684 element at address: 0x20001aa00000 with size: 0.572815 MiB 00:17:31.684 element at address: 0x20000b200000 with size: 0.489990 MiB 00:17:31.684 element at address: 0x200000800000 with size: 0.487061 MiB 00:17:31.684 element at address: 0x200019400000 with size: 0.485657 MiB 00:17:31.684 element at address: 0x200027e00000 with size: 0.398132 MiB 00:17:31.684 element at address: 0x200003a00000 with size: 0.351685 MiB 00:17:31.684 list of standard malloc elements. size: 199.249939 MiB 00:17:31.684 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:17:31.684 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:17:31.684 element at address: 0x200018efff80 with size: 1.000122 MiB 00:17:31.684 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:17:31.684 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:17:31.684 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:17:31.684 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:17:31.684 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:17:31.684 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:17:31.684 element at address: 0x2000002d6480 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6540 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6600 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d66c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6780 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6840 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6900 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d69c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6a80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6b40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6c00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6cc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6d80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6e40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6f00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d6fc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d71c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7280 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7340 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7400 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d74c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7580 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7640 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7700 with size: 0.000183 MiB 
00:17:31.684 element at address: 0x2000002d77c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7880 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7940 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7a00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000087cb00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000087cbc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000087cc80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000087cd40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a080 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a140 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a200 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a2c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a380 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a440 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a500 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a5c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a680 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a740 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a800 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a8c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5a980 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5aa40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5ab00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5abc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5ac80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5ad40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5ae00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5aec0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5af80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003adb300 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003adb500 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003affa80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003affb40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000b27d700 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000b27d7c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000b27d880 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000b27d940 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:17:31.684 element at 
address: 0x20000b27dac0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92a40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92b00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92bc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92c80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92d40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92e00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92ec0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa92f80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93040 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93100 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa931c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93280 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93340 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93400 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa934c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93580 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93640 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93700 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa937c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93880 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93940 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93a00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93ac0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93b80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93c40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93d00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93dc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93e80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa93f40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94000 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa940c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94180 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94240 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94300 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa943c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94480 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94540 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94600 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa946c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94780 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94840 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94900 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa949c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94a80 
with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94b40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94c00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94cc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94d80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94e40 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94f00 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa94fc0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa95080 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa95140 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa95200 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa952c0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:17:31.684 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200027e65ec0 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200027e65f80 with size: 0.000183 MiB 00:17:31.684 element at address: 0x200027e6cb80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6cd80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ce40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6cf00 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6cfc0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d080 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d140 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d200 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d2c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d380 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d440 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d500 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d5c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d680 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d740 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d800 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d8c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6d980 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6da40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6db00 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6dbc0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6dc80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6dd40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6de00 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6dec0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6df80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e040 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e100 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e1c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e280 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e340 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e400 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e4c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e580 with size: 0.000183 MiB 
00:17:31.685 element at address: 0x200027e6e640 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e700 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e7c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e880 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6e940 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ea00 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6eac0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6eb80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ec40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ed00 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6edc0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ee80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ef40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f000 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f0c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f180 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f240 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f300 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f3c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f480 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f540 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f600 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f6c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f780 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f840 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f900 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6f9c0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6fa80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6fb40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6fc00 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6fcc0 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6fd80 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:17:31.685 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:17:31.685 list of memzone associated elements. 
size: 602.262573 MiB 00:17:31.685 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:17:31.685 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:17:31.685 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:17:31.685 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:17:31.685 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:17:31.685 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_61408_0 00:17:31.685 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:17:31.685 associated memzone info: size: 48.002930 MiB name: MP_evtpool_61408_0 00:17:31.685 element at address: 0x200003fff380 with size: 48.003052 MiB 00:17:31.685 associated memzone info: size: 48.002930 MiB name: MP_msgpool_61408_0 00:17:31.685 element at address: 0x2000195be940 with size: 20.255554 MiB 00:17:31.685 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:17:31.685 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:17:31.685 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:17:31.685 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:17:31.685 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_61408 00:17:31.685 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:17:31.685 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_61408 00:17:31.685 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:17:31.685 associated memzone info: size: 1.007996 MiB name: MP_evtpool_61408 00:17:31.685 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:17:31.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:17:31.685 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:17:31.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:17:31.685 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:17:31.685 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:17:31.685 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:17:31.685 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:17:31.685 element at address: 0x200003eff180 with size: 1.000488 MiB 00:17:31.685 associated memzone info: size: 1.000366 MiB name: RG_ring_0_61408 00:17:31.685 element at address: 0x200003affc00 with size: 1.000488 MiB 00:17:31.685 associated memzone info: size: 1.000366 MiB name: RG_ring_1_61408 00:17:31.685 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:17:31.685 associated memzone info: size: 1.000366 MiB name: RG_ring_4_61408 00:17:31.685 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:17:31.685 associated memzone info: size: 1.000366 MiB name: RG_ring_5_61408 00:17:31.685 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:17:31.685 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_61408 00:17:31.685 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:17:31.685 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:17:31.685 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:17:31.685 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:17:31.685 element at address: 0x20001947c540 with size: 0.250488 MiB 00:17:31.685 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:17:31.685 element at address: 0x200003adf880 with size: 0.125488 MiB 00:17:31.685 associated memzone info: size: 
0.125366 MiB name: RG_ring_2_61408 00:17:31.685 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:17:31.685 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:17:31.685 element at address: 0x200027e66040 with size: 0.023743 MiB 00:17:31.685 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:17:31.685 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:17:31.685 associated memzone info: size: 0.015991 MiB name: RG_ring_3_61408 00:17:31.685 element at address: 0x200027e6c180 with size: 0.002441 MiB 00:17:31.685 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:17:31.685 element at address: 0x2000002d7080 with size: 0.000305 MiB 00:17:31.685 associated memzone info: size: 0.000183 MiB name: MP_msgpool_61408 00:17:31.685 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:17:31.685 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_61408 00:17:31.685 element at address: 0x200027e6cc40 with size: 0.000305 MiB 00:17:31.685 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:17:31.685 15:36:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:17:31.685 15:36:01 -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 61408 00:17:31.685 15:36:01 -- common/autotest_common.sh@936 -- # '[' -z 61408 ']' 00:17:31.685 15:36:01 -- common/autotest_common.sh@940 -- # kill -0 61408 00:17:31.685 15:36:01 -- common/autotest_common.sh@941 -- # uname 00:17:31.685 15:36:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:31.685 15:36:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61408 00:17:31.685 killing process with pid 61408 00:17:31.685 15:36:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:17:31.685 15:36:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:17:31.685 15:36:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61408' 00:17:31.685 15:36:01 -- common/autotest_common.sh@955 -- # kill 61408 00:17:31.685 15:36:01 -- common/autotest_common.sh@960 -- # wait 61408 00:17:32.268 ************************************ 00:17:32.268 END TEST dpdk_mem_utility 00:17:32.268 ************************************ 00:17:32.268 00:17:32.268 real 0m1.628s 00:17:32.268 user 0m1.727s 00:17:32.268 sys 0m0.412s 00:17:32.268 15:36:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:32.268 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.268 15:36:02 -- spdk/autotest.sh@177 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:17:32.268 15:36:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:32.268 15:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.268 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.268 ************************************ 00:17:32.268 START TEST event 00:17:32.268 ************************************ 00:17:32.268 15:36:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:17:32.268 * Looking for test storage... 
00:17:32.268 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:17:32.268 15:36:02 -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:17:32.268 15:36:02 -- bdev/nbd_common.sh@6 -- # set -e 00:17:32.268 15:36:02 -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:32.268 15:36:02 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:17:32.268 15:36:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:32.268 15:36:02 -- common/autotest_common.sh@10 -- # set +x 00:17:32.268 ************************************ 00:17:32.268 START TEST event_perf 00:17:32.268 ************************************ 00:17:32.268 15:36:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:17:32.526 Running I/O for 1 seconds...[2024-04-26 15:36:02.569217] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:17:32.526 [2024-04-26 15:36:02.569291] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61508 ] 00:17:32.526 [2024-04-26 15:36:02.704004] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:32.784 [2024-04-26 15:36:02.824546] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:32.784 [2024-04-26 15:36:02.824698] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:32.784 Running I/O for 1 seconds...[2024-04-26 15:36:02.824833] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:32.784 [2024-04-26 15:36:02.824847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:33.718 00:17:33.718 lcore 0: 189053 00:17:33.718 lcore 1: 189052 00:17:33.718 lcore 2: 189052 00:17:33.718 lcore 3: 189053 00:17:33.718 done. 00:17:33.718 00:17:33.718 real 0m1.390s 00:17:33.718 ************************************ 00:17:33.718 END TEST event_perf 00:17:33.718 ************************************ 00:17:33.718 user 0m4.204s 00:17:33.718 sys 0m0.063s 00:17:33.718 15:36:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:33.718 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.718 15:36:03 -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:33.718 15:36:03 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:33.718 15:36:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:33.718 15:36:03 -- common/autotest_common.sh@10 -- # set +x 00:17:33.975 ************************************ 00:17:33.975 START TEST event_reactor 00:17:33.975 ************************************ 00:17:33.975 15:36:04 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:17:33.975 [2024-04-26 15:36:04.066240] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:33.975 [2024-04-26 15:36:04.066326] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61555 ] 00:17:33.975 [2024-04-26 15:36:04.197784] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:34.234 [2024-04-26 15:36:04.315730] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.167 test_start 00:17:35.167 oneshot 00:17:35.167 tick 100 00:17:35.167 tick 100 00:17:35.167 tick 250 00:17:35.167 tick 100 00:17:35.167 tick 100 00:17:35.167 tick 100 00:17:35.167 tick 250 00:17:35.167 tick 500 00:17:35.167 tick 100 00:17:35.167 tick 100 00:17:35.167 tick 250 00:17:35.167 tick 100 00:17:35.167 tick 100 00:17:35.167 test_end 00:17:35.167 00:17:35.167 real 0m1.378s 00:17:35.167 user 0m1.212s 00:17:35.167 sys 0m0.057s 00:17:35.167 15:36:05 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:35.167 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.167 ************************************ 00:17:35.167 END TEST event_reactor 00:17:35.167 ************************************ 00:17:35.425 15:36:05 -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:35.425 15:36:05 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:17:35.425 15:36:05 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:35.425 15:36:05 -- common/autotest_common.sh@10 -- # set +x 00:17:35.425 ************************************ 00:17:35.425 START TEST event_reactor_perf 00:17:35.425 ************************************ 00:17:35.425 15:36:05 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:17:35.425 [2024-04-26 15:36:05.554067] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:35.425 [2024-04-26 15:36:05.554164] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61590 ] 00:17:35.425 [2024-04-26 15:36:05.687081] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.683 [2024-04-26 15:36:05.804259] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.057 test_start 00:17:37.057 test_end 00:17:37.057 Performance: 374025 events per second 00:17:37.057 00:17:37.057 real 0m1.380s 00:17:37.057 user 0m1.216s 00:17:37.057 sys 0m0.057s 00:17:37.057 ************************************ 00:17:37.057 END TEST event_reactor_perf 00:17:37.057 ************************************ 00:17:37.057 15:36:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:37.057 15:36:06 -- common/autotest_common.sh@10 -- # set +x 00:17:37.057 15:36:06 -- event/event.sh@49 -- # uname -s 00:17:37.057 15:36:06 -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:17:37.057 15:36:06 -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:37.057 15:36:06 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:37.057 15:36:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:37.057 15:36:06 -- common/autotest_common.sh@10 -- # set +x 00:17:37.057 ************************************ 00:17:37.057 START TEST event_scheduler 00:17:37.057 ************************************ 00:17:37.057 15:36:07 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:17:37.057 * Looking for test storage... 00:17:37.057 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:17:37.057 15:36:07 -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:17:37.057 15:36:07 -- scheduler/scheduler.sh@35 -- # scheduler_pid=61658 00:17:37.057 15:36:07 -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:17:37.057 15:36:07 -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:17:37.057 15:36:07 -- scheduler/scheduler.sh@37 -- # waitforlisten 61658 00:17:37.057 15:36:07 -- common/autotest_common.sh@817 -- # '[' -z 61658 ']' 00:17:37.057 15:36:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:37.057 15:36:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:37.057 15:36:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:37.057 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:37.057 15:36:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:37.057 15:36:07 -- common/autotest_common.sh@10 -- # set +x 00:17:37.057 [2024-04-26 15:36:07.166802] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:37.057 [2024-04-26 15:36:07.166898] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61658 ] 00:17:37.057 [2024-04-26 15:36:07.306450] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:37.316 [2024-04-26 15:36:07.434297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:37.316 [2024-04-26 15:36:07.434413] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:37.316 [2024-04-26 15:36:07.434526] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:17:37.316 [2024-04-26 15:36:07.434531] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:17:38.251 15:36:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:38.251 15:36:08 -- common/autotest_common.sh@850 -- # return 0 00:17:38.251 15:36:08 -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:17:38.251 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.251 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.251 POWER: Env isn't set yet! 00:17:38.251 POWER: Attempting to initialise ACPI cpufreq power management... 00:17:38.251 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:38.251 POWER: Cannot set governor of lcore 0 to userspace 00:17:38.251 POWER: Attempting to initialise PSTAT power management... 00:17:38.251 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:38.251 POWER: Cannot set governor of lcore 0 to performance 00:17:38.251 POWER: Attempting to initialise AMD PSTATE power management... 00:17:38.251 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:38.251 POWER: Cannot set governor of lcore 0 to userspace 00:17:38.251 POWER: Attempting to initialise CPPC power management... 00:17:38.251 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:17:38.251 POWER: Cannot set governor of lcore 0 to userspace 00:17:38.251 POWER: Attempting to initialise VM power management... 00:17:38.251 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:17:38.251 POWER: Unable to set Power Management Environment for lcore 0 00:17:38.251 [2024-04-26 15:36:08.210409] dpdk_governor.c: 88:_init_core: *ERROR*: Failed to initialize on core0 00:17:38.251 [2024-04-26 15:36:08.210447] dpdk_governor.c: 118:_init: *ERROR*: Failed to initialize on core0 00:17:38.251 [2024-04-26 15:36:08.210478] scheduler_dynamic.c: 238:init: *NOTICE*: Unable to initialize dpdk governor 00:17:38.251 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.251 15:36:08 -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:17:38.251 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.251 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.251 [2024-04-26 15:36:08.303610] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:17:38.251 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.251 15:36:08 -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:17:38.251 15:36:08 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:38.251 15:36:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:38.251 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.251 ************************************ 00:17:38.251 START TEST scheduler_create_thread 00:17:38.251 ************************************ 00:17:38.251 15:36:08 -- common/autotest_common.sh@1111 -- # scheduler_create_thread 00:17:38.251 15:36:08 -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:17:38.251 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.251 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.251 2 00:17:38.251 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.251 15:36:08 -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:17:38.251 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.251 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.251 3 00:17:38.251 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 4 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 5 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 6 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 7 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 8 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 9 00:17:38.252 
15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 10 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.252 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@22 -- # thread_id=11 00:17:38.252 15:36:08 -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:17:38.252 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.252 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:38.819 15:36:08 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:38.819 15:36:08 -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:17:38.819 15:36:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:38.819 15:36:08 -- common/autotest_common.sh@10 -- # set +x 00:17:40.193 15:36:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:40.193 15:36:10 -- scheduler/scheduler.sh@25 -- # thread_id=12 00:17:40.193 15:36:10 -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:17:40.193 15:36:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:17:40.193 15:36:10 -- common/autotest_common.sh@10 -- # set +x 00:17:41.646 15:36:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:17:41.646 ************************************ 00:17:41.646 END TEST scheduler_create_thread 00:17:41.646 ************************************ 00:17:41.646 00:17:41.646 real 0m3.092s 00:17:41.646 user 0m0.017s 00:17:41.646 sys 0m0.007s 00:17:41.646 15:36:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:41.646 15:36:11 -- common/autotest_common.sh@10 -- # set +x 00:17:41.646 15:36:11 -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:17:41.646 15:36:11 -- scheduler/scheduler.sh@46 -- # killprocess 61658 00:17:41.646 15:36:11 -- common/autotest_common.sh@936 -- # '[' -z 61658 ']' 00:17:41.646 15:36:11 -- common/autotest_common.sh@940 -- # kill -0 61658 00:17:41.646 15:36:11 -- common/autotest_common.sh@941 -- # uname 00:17:41.646 15:36:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:17:41.646 15:36:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61658 00:17:41.646 killing process with pid 61658 00:17:41.646 15:36:11 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:17:41.646 15:36:11 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:17:41.646 15:36:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61658' 00:17:41.646 15:36:11 -- common/autotest_common.sh@955 -- # kill 61658 00:17:41.646 15:36:11 -- common/autotest_common.sh@960 -- # wait 61658 00:17:41.646 [2024-04-26 15:36:11.848286] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:17:41.905 ************************************ 00:17:41.905 END TEST event_scheduler 00:17:41.905 ************************************ 00:17:41.905 00:17:41.905 real 0m5.100s 00:17:41.905 user 0m10.040s 00:17:41.905 sys 0m0.401s 00:17:41.905 15:36:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:17:41.905 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:41.905 15:36:12 -- event/event.sh@51 -- # modprobe -n nbd 00:17:41.905 15:36:12 -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:17:41.905 15:36:12 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:17:41.905 15:36:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:17:41.905 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:42.163 ************************************ 00:17:42.163 START TEST app_repeat 00:17:42.163 ************************************ 00:17:42.163 15:36:12 -- common/autotest_common.sh@1111 -- # app_repeat_test 00:17:42.163 15:36:12 -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:42.163 15:36:12 -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:42.163 15:36:12 -- event/event.sh@13 -- # local nbd_list 00:17:42.163 15:36:12 -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:42.163 15:36:12 -- event/event.sh@14 -- # local bdev_list 00:17:42.163 15:36:12 -- event/event.sh@15 -- # local repeat_times=4 00:17:42.163 15:36:12 -- event/event.sh@17 -- # modprobe nbd 00:17:42.163 15:36:12 -- event/event.sh@19 -- # repeat_pid=61789 00:17:42.163 15:36:12 -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:17:42.163 Process app_repeat pid: 61789 00:17:42.163 spdk_app_start Round 0 00:17:42.163 15:36:12 -- event/event.sh@21 -- # echo 'Process app_repeat pid: 61789' 00:17:42.163 15:36:12 -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:17:42.163 15:36:12 -- event/event.sh@23 -- # for i in {0..2} 00:17:42.163 15:36:12 -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:17:42.163 15:36:12 -- event/event.sh@25 -- # waitforlisten 61789 /var/tmp/spdk-nbd.sock 00:17:42.163 15:36:12 -- common/autotest_common.sh@817 -- # '[' -z 61789 ']' 00:17:42.163 15:36:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:42.163 15:36:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:42.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:42.163 15:36:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:42.163 15:36:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:42.163 15:36:12 -- common/autotest_common.sh@10 -- # set +x 00:17:42.163 [2024-04-26 15:36:12.277550] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:17:42.163 [2024-04-26 15:36:12.277641] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61789 ] 00:17:42.163 [2024-04-26 15:36:12.412091] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:42.420 [2024-04-26 15:36:12.551027] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:42.420 [2024-04-26 15:36:12.551036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:42.986 15:36:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:42.986 15:36:13 -- common/autotest_common.sh@850 -- # return 0 00:17:42.986 15:36:13 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:43.551 Malloc0 00:17:43.551 15:36:13 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:43.551 Malloc1 00:17:43.551 15:36:13 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@12 -- # local i 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:43.551 15:36:13 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:43.809 /dev/nbd0 00:17:44.067 15:36:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:44.067 15:36:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:44.067 15:36:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:17:44.067 15:36:14 -- common/autotest_common.sh@855 -- # local i 00:17:44.067 15:36:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:17:44.067 15:36:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:17:44.067 15:36:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:17:44.067 15:36:14 -- common/autotest_common.sh@859 -- # break 00:17:44.067 15:36:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:17:44.067 15:36:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:17:44.067 15:36:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:44.067 1+0 records in 00:17:44.067 1+0 records out 00:17:44.067 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000346444 s, 11.8 MB/s 00:17:44.067 15:36:14 -- 
common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.067 15:36:14 -- common/autotest_common.sh@872 -- # size=4096 00:17:44.067 15:36:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.067 15:36:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:17:44.067 15:36:14 -- common/autotest_common.sh@875 -- # return 0 00:17:44.067 15:36:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.067 15:36:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.067 15:36:14 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:44.067 /dev/nbd1 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:44.350 15:36:14 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:17:44.350 15:36:14 -- common/autotest_common.sh@855 -- # local i 00:17:44.350 15:36:14 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:17:44.350 15:36:14 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:17:44.350 15:36:14 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:17:44.350 15:36:14 -- common/autotest_common.sh@859 -- # break 00:17:44.350 15:36:14 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:17:44.350 15:36:14 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:17:44.350 15:36:14 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:44.350 1+0 records in 00:17:44.350 1+0 records out 00:17:44.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379757 s, 10.8 MB/s 00:17:44.350 15:36:14 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.350 15:36:14 -- common/autotest_common.sh@872 -- # size=4096 00:17:44.350 15:36:14 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:44.350 15:36:14 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:17:44.350 15:36:14 -- common/autotest_common.sh@875 -- # return 0 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.350 15:36:14 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:44.608 { 00:17:44.608 "bdev_name": "Malloc0", 00:17:44.608 "nbd_device": "/dev/nbd0" 00:17:44.608 }, 00:17:44.608 { 00:17:44.608 "bdev_name": "Malloc1", 00:17:44.608 "nbd_device": "/dev/nbd1" 00:17:44.608 } 00:17:44.608 ]' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:44.608 { 00:17:44.608 "bdev_name": "Malloc0", 00:17:44.608 "nbd_device": "/dev/nbd0" 00:17:44.608 }, 00:17:44.608 { 00:17:44.608 "bdev_name": "Malloc1", 00:17:44.608 "nbd_device": "/dev/nbd1" 00:17:44.608 } 00:17:44.608 ]' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:44.608 /dev/nbd1' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:44.608 /dev/nbd1' 00:17:44.608 15:36:14 -- 
bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@65 -- # count=2 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@66 -- # echo 2 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@95 -- # count=2 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:44.608 256+0 records in 00:17:44.608 256+0 records out 00:17:44.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0108262 s, 96.9 MB/s 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:44.608 256+0 records in 00:17:44.608 256+0 records out 00:17:44.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0249331 s, 42.1 MB/s 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:44.608 256+0 records in 00:17:44.608 256+0 records out 00:17:44.608 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0273272 s, 38.4 MB/s 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@51 -- # local i 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.608 15:36:14 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@41 -- # break 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@45 -- # return 0 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:44.865 15:36:15 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@41 -- # break 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@45 -- # return 0 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:45.122 15:36:15 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:45.380 15:36:15 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:45.380 15:36:15 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:45.380 15:36:15 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@65 -- # true 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@65 -- # count=0 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@104 -- # count=0 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:45.637 15:36:15 -- bdev/nbd_common.sh@109 -- # return 0 00:17:45.637 15:36:15 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:45.895 15:36:16 -- event/event.sh@35 -- # sleep 3 00:17:46.154 [2024-04-26 15:36:16.251048] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:46.154 [2024-04-26 15:36:16.365859] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:46.154 [2024-04-26 15:36:16.365869] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:46.154 [2024-04-26 15:36:16.421067] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:17:46.154 [2024-04-26 15:36:16.421134] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
00:17:49.433 15:36:19 -- event/event.sh@23 -- # for i in {0..2} 00:17:49.433 spdk_app_start Round 1 00:17:49.433 15:36:19 -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:17:49.433 15:36:19 -- event/event.sh@25 -- # waitforlisten 61789 /var/tmp/spdk-nbd.sock 00:17:49.433 15:36:19 -- common/autotest_common.sh@817 -- # '[' -z 61789 ']' 00:17:49.433 15:36:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:49.433 15:36:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:49.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:49.433 15:36:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:49.433 15:36:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:49.433 15:36:19 -- common/autotest_common.sh@10 -- # set +x 00:17:49.433 15:36:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:49.433 15:36:19 -- common/autotest_common.sh@850 -- # return 0 00:17:49.433 15:36:19 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:49.433 Malloc0 00:17:49.433 15:36:19 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:49.691 Malloc1 00:17:49.691 15:36:19 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@12 -- # local i 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.691 15:36:19 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:49.949 /dev/nbd0 00:17:49.949 15:36:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:49.949 15:36:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:49.949 15:36:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:17:49.949 15:36:20 -- common/autotest_common.sh@855 -- # local i 00:17:49.949 15:36:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:17:49.949 15:36:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:17:49.949 15:36:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:17:49.949 15:36:20 -- common/autotest_common.sh@859 -- # break 00:17:49.949 15:36:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:17:49.949 15:36:20 -- common/autotest_common.sh@870 -- # (( i 
<= 20 )) 00:17:49.949 15:36:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:49.949 1+0 records in 00:17:49.949 1+0 records out 00:17:49.949 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239532 s, 17.1 MB/s 00:17:49.949 15:36:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:49.949 15:36:20 -- common/autotest_common.sh@872 -- # size=4096 00:17:49.949 15:36:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:49.949 15:36:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:17:49.949 15:36:20 -- common/autotest_common.sh@875 -- # return 0 00:17:49.949 15:36:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:49.949 15:36:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:49.949 15:36:20 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:50.207 /dev/nbd1 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:50.207 15:36:20 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:17:50.207 15:36:20 -- common/autotest_common.sh@855 -- # local i 00:17:50.207 15:36:20 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:17:50.207 15:36:20 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:17:50.207 15:36:20 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:17:50.207 15:36:20 -- common/autotest_common.sh@859 -- # break 00:17:50.207 15:36:20 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:17:50.207 15:36:20 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:17:50.207 15:36:20 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:50.207 1+0 records in 00:17:50.207 1+0 records out 00:17:50.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000421635 s, 9.7 MB/s 00:17:50.207 15:36:20 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:50.207 15:36:20 -- common/autotest_common.sh@872 -- # size=4096 00:17:50.207 15:36:20 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:50.207 15:36:20 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:17:50.207 15:36:20 -- common/autotest_common.sh@875 -- # return 0 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:50.207 15:36:20 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:50.465 15:36:20 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:50.465 { 00:17:50.465 "bdev_name": "Malloc0", 00:17:50.465 "nbd_device": "/dev/nbd0" 00:17:50.465 }, 00:17:50.465 { 00:17:50.465 "bdev_name": "Malloc1", 00:17:50.465 "nbd_device": "/dev/nbd1" 00:17:50.465 } 00:17:50.465 ]' 00:17:50.465 15:36:20 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:50.465 { 00:17:50.465 "bdev_name": "Malloc0", 00:17:50.465 "nbd_device": "/dev/nbd0" 00:17:50.465 }, 00:17:50.465 { 00:17:50.465 "bdev_name": "Malloc1", 00:17:50.465 "nbd_device": "/dev/nbd1" 00:17:50.465 } 
00:17:50.465 ]' 00:17:50.465 15:36:20 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:50.465 15:36:20 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:50.465 /dev/nbd1' 00:17:50.465 15:36:20 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:50.465 /dev/nbd1' 00:17:50.465 15:36:20 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@65 -- # count=2 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@66 -- # echo 2 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@95 -- # count=2 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:50.466 256+0 records in 00:17:50.466 256+0 records out 00:17:50.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00710126 s, 148 MB/s 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:50.466 256+0 records in 00:17:50.466 256+0 records out 00:17:50.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0254322 s, 41.2 MB/s 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:50.466 256+0 records in 00:17:50.466 256+0 records out 00:17:50.466 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0275149 s, 38.1 MB/s 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:50.466 15:36:20 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 
00:17:50.724 15:36:20 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@51 -- # local i 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:50.724 15:36:20 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:50.724 15:36:21 -- bdev/nbd_common.sh@41 -- # break 00:17:50.724 15:36:21 -- bdev/nbd_common.sh@45 -- # return 0 00:17:50.724 15:36:21 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:50.724 15:36:21 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@41 -- # break 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@45 -- # return 0 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:51.289 15:36:21 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@65 -- # true 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@65 -- # count=0 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@104 -- # count=0 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:51.548 15:36:21 -- bdev/nbd_common.sh@109 -- # return 0 00:17:51.548 15:36:21 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:51.805 15:36:21 -- event/event.sh@35 -- # sleep 3 00:17:52.063 [2024-04-26 15:36:22.105294] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:52.063 [2024-04-26 15:36:22.219734] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.063 [2024-04-26 15:36:22.219746] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.063 [2024-04-26 15:36:22.276448] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 
00:17:52.063 [2024-04-26 15:36:22.276518] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:17:55.342 15:36:24 -- event/event.sh@23 -- # for i in {0..2} 00:17:55.342 spdk_app_start Round 2 00:17:55.342 15:36:24 -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:17:55.342 15:36:24 -- event/event.sh@25 -- # waitforlisten 61789 /var/tmp/spdk-nbd.sock 00:17:55.342 15:36:24 -- common/autotest_common.sh@817 -- # '[' -z 61789 ']' 00:17:55.342 15:36:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:17:55.342 15:36:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:17:55.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:17:55.342 15:36:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:17:55.342 15:36:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:17:55.342 15:36:24 -- common/autotest_common.sh@10 -- # set +x 00:17:55.342 15:36:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:17:55.342 15:36:25 -- common/autotest_common.sh@850 -- # return 0 00:17:55.342 15:36:25 -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:55.342 Malloc0 00:17:55.342 15:36:25 -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:17:55.599 Malloc1 00:17:55.599 15:36:25 -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@91 -- # local bdev_list 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@92 -- # local nbd_list 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@12 -- # local i 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:55.599 15:36:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:17:55.857 /dev/nbd0 00:17:55.857 15:36:25 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:55.857 15:36:25 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:55.857 15:36:25 -- common/autotest_common.sh@854 -- # local nbd_name=nbd0 00:17:55.857 15:36:25 -- common/autotest_common.sh@855 -- # local i 00:17:55.857 15:36:25 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:17:55.857 15:36:25 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:17:55.857 15:36:25 -- common/autotest_common.sh@858 -- # grep -q -w nbd0 /proc/partitions 00:17:55.857 15:36:25 -- common/autotest_common.sh@859 
-- # break 00:17:55.857 15:36:25 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:17:55.857 15:36:25 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:17:55.857 15:36:25 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:55.857 1+0 records in 00:17:55.857 1+0 records out 00:17:55.857 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000448198 s, 9.1 MB/s 00:17:55.857 15:36:25 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:55.857 15:36:25 -- common/autotest_common.sh@872 -- # size=4096 00:17:55.857 15:36:25 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:55.857 15:36:25 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:17:55.857 15:36:25 -- common/autotest_common.sh@875 -- # return 0 00:17:55.857 15:36:25 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:55.857 15:36:25 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:55.857 15:36:25 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:17:56.114 /dev/nbd1 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:56.114 15:36:26 -- common/autotest_common.sh@854 -- # local nbd_name=nbd1 00:17:56.114 15:36:26 -- common/autotest_common.sh@855 -- # local i 00:17:56.114 15:36:26 -- common/autotest_common.sh@857 -- # (( i = 1 )) 00:17:56.114 15:36:26 -- common/autotest_common.sh@857 -- # (( i <= 20 )) 00:17:56.114 15:36:26 -- common/autotest_common.sh@858 -- # grep -q -w nbd1 /proc/partitions 00:17:56.114 15:36:26 -- common/autotest_common.sh@859 -- # break 00:17:56.114 15:36:26 -- common/autotest_common.sh@870 -- # (( i = 1 )) 00:17:56.114 15:36:26 -- common/autotest_common.sh@870 -- # (( i <= 20 )) 00:17:56.114 15:36:26 -- common/autotest_common.sh@871 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:17:56.114 1+0 records in 00:17:56.114 1+0 records out 00:17:56.114 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00031573 s, 13.0 MB/s 00:17:56.114 15:36:26 -- common/autotest_common.sh@872 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:56.114 15:36:26 -- common/autotest_common.sh@872 -- # size=4096 00:17:56.114 15:36:26 -- common/autotest_common.sh@873 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:17:56.114 15:36:26 -- common/autotest_common.sh@874 -- # '[' 4096 '!=' 0 ']' 00:17:56.114 15:36:26 -- common/autotest_common.sh@875 -- # return 0 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:56.114 15:36:26 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:17:56.372 { 00:17:56.372 "bdev_name": "Malloc0", 00:17:56.372 "nbd_device": "/dev/nbd0" 00:17:56.372 }, 00:17:56.372 { 00:17:56.372 "bdev_name": "Malloc1", 00:17:56.372 "nbd_device": "/dev/nbd1" 00:17:56.372 } 00:17:56.372 ]' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@64 -- # echo '[ 00:17:56.372 { 00:17:56.372 "bdev_name": "Malloc0", 00:17:56.372 
"nbd_device": "/dev/nbd0" 00:17:56.372 }, 00:17:56.372 { 00:17:56.372 "bdev_name": "Malloc1", 00:17:56.372 "nbd_device": "/dev/nbd1" 00:17:56.372 } 00:17:56.372 ]' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:17:56.372 /dev/nbd1' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:17:56.372 /dev/nbd1' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@65 -- # count=2 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@66 -- # echo 2 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@95 -- # count=2 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@71 -- # local operation=write 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:17:56.372 256+0 records in 00:17:56.372 256+0 records out 00:17:56.372 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607701 s, 173 MB/s 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:56.372 15:36:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:17:56.629 256+0 records in 00:17:56.629 256+0 records out 00:17:56.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0247135 s, 42.4 MB/s 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:17:56.629 256+0 records in 00:17:56.629 256+0 records out 00:17:56.629 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.02621 s, 40.0 MB/s 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@70 -- # local nbd_list 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@71 -- # local operation=verify 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:17:56.629 15:36:26 -- 
bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@51 -- # local i 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.629 15:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@41 -- # break 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@45 -- # return 0 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:56.886 15:36:26 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@41 -- # break 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@45 -- # return 0 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:17:57.143 15:36:27 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@64 -- # echo '[]' 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@65 -- # echo '' 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@65 -- # true 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@65 -- # count=0 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@66 -- # echo 0 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@104 -- # count=0 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:17:57.400 15:36:27 -- bdev/nbd_common.sh@109 -- # return 0 00:17:57.400 15:36:27 -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:17:57.658 15:36:27 -- event/event.sh@35 -- # sleep 3 00:17:57.915 [2024-04-26 15:36:28.098984] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:17:58.173 [2024-04-26 15:36:28.211470] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:17:58.173 [2024-04-26 15:36:28.211482] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:17:58.173 [2024-04-26 15:36:28.266984] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 
'bdev_register' already registered. 00:17:58.173 [2024-04-26 15:36:28.267046] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:18:00.699 15:36:30 -- event/event.sh@38 -- # waitforlisten 61789 /var/tmp/spdk-nbd.sock 00:18:00.699 15:36:30 -- common/autotest_common.sh@817 -- # '[' -z 61789 ']' 00:18:00.699 15:36:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:18:00.699 15:36:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:00.699 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:18:00.699 15:36:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:18:00.699 15:36:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:00.699 15:36:30 -- common/autotest_common.sh@10 -- # set +x 00:18:00.956 15:36:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:00.956 15:36:31 -- common/autotest_common.sh@850 -- # return 0 00:18:00.956 15:36:31 -- event/event.sh@39 -- # killprocess 61789 00:18:00.956 15:36:31 -- common/autotest_common.sh@936 -- # '[' -z 61789 ']' 00:18:00.956 15:36:31 -- common/autotest_common.sh@940 -- # kill -0 61789 00:18:00.956 15:36:31 -- common/autotest_common.sh@941 -- # uname 00:18:00.956 15:36:31 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:00.956 15:36:31 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 61789 00:18:00.956 15:36:31 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:00.956 15:36:31 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:00.956 killing process with pid 61789 00:18:00.956 15:36:31 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 61789' 00:18:00.956 15:36:31 -- common/autotest_common.sh@955 -- # kill 61789 00:18:00.956 15:36:31 -- common/autotest_common.sh@960 -- # wait 61789 00:18:01.215 spdk_app_start is called in Round 0. 00:18:01.215 Shutdown signal received, stop current app iteration 00:18:01.215 Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 reinitialization... 00:18:01.215 spdk_app_start is called in Round 1. 00:18:01.215 Shutdown signal received, stop current app iteration 00:18:01.215 Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 reinitialization... 00:18:01.215 spdk_app_start is called in Round 2. 00:18:01.215 Shutdown signal received, stop current app iteration 00:18:01.215 Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 reinitialization... 00:18:01.215 spdk_app_start is called in Round 3. 
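The Round 0 through Round 3 messages above come from the app_repeat harness: after each round of nbd I/O it asks the running event app to shut itself down over its RPC socket, waits, and lets the next iteration come up. Stripped down, the kill-and-wait step between rounds looks roughly like this (socket path and sleep taken from the log above; a sketch, not the exact event.sh code):

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
  sleep 3   # give the app time to handle the signal and start the next round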
00:18:01.215 Shutdown signal received, stop current app iteration 00:18:01.215 15:36:31 -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:18:01.215 15:36:31 -- event/event.sh@42 -- # return 0 00:18:01.215 00:18:01.215 real 0m19.160s 00:18:01.215 user 0m42.661s 00:18:01.215 sys 0m3.084s 00:18:01.215 15:36:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:01.215 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.215 ************************************ 00:18:01.215 END TEST app_repeat 00:18:01.215 ************************************ 00:18:01.215 15:36:31 -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:18:01.215 15:36:31 -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:01.215 15:36:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:01.215 15:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:01.215 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.473 ************************************ 00:18:01.473 START TEST cpu_locks 00:18:01.473 ************************************ 00:18:01.473 15:36:31 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:18:01.473 * Looking for test storage... 00:18:01.473 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:18:01.473 15:36:31 -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:18:01.473 15:36:31 -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:18:01.473 15:36:31 -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:18:01.473 15:36:31 -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:18:01.473 15:36:31 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:01.473 15:36:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:01.473 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.473 ************************************ 00:18:01.473 START TEST default_locks 00:18:01.473 ************************************ 00:18:01.473 15:36:31 -- common/autotest_common.sh@1111 -- # default_locks 00:18:01.473 15:36:31 -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=62429 00:18:01.473 15:36:31 -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:01.473 15:36:31 -- event/cpu_locks.sh@47 -- # waitforlisten 62429 00:18:01.473 15:36:31 -- common/autotest_common.sh@817 -- # '[' -z 62429 ']' 00:18:01.473 15:36:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.473 15:36:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:01.473 15:36:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:01.473 15:36:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:01.473 15:36:31 -- common/autotest_common.sh@10 -- # set +x 00:18:01.473 [2024-04-26 15:36:31.746287] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:01.473 [2024-04-26 15:36:31.746387] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62429 ] 00:18:01.730 [2024-04-26 15:36:31.886095] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.730 [2024-04-26 15:36:32.003867] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:02.663 15:36:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:02.663 15:36:32 -- common/autotest_common.sh@850 -- # return 0 00:18:02.663 15:36:32 -- event/cpu_locks.sh@49 -- # locks_exist 62429 00:18:02.663 15:36:32 -- event/cpu_locks.sh@22 -- # lslocks -p 62429 00:18:02.663 15:36:32 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:03.228 15:36:33 -- event/cpu_locks.sh@50 -- # killprocess 62429 00:18:03.228 15:36:33 -- common/autotest_common.sh@936 -- # '[' -z 62429 ']' 00:18:03.228 15:36:33 -- common/autotest_common.sh@940 -- # kill -0 62429 00:18:03.228 15:36:33 -- common/autotest_common.sh@941 -- # uname 00:18:03.228 15:36:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:03.228 15:36:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62429 00:18:03.228 killing process with pid 62429 00:18:03.228 15:36:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:03.228 15:36:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:03.228 15:36:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62429' 00:18:03.228 15:36:33 -- common/autotest_common.sh@955 -- # kill 62429 00:18:03.228 15:36:33 -- common/autotest_common.sh@960 -- # wait 62429 00:18:03.485 15:36:33 -- event/cpu_locks.sh@52 -- # NOT waitforlisten 62429 00:18:03.485 15:36:33 -- common/autotest_common.sh@638 -- # local es=0 00:18:03.485 15:36:33 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62429 00:18:03.485 15:36:33 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:18:03.485 15:36:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.485 15:36:33 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:18:03.485 15:36:33 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:03.485 15:36:33 -- common/autotest_common.sh@641 -- # waitforlisten 62429 00:18:03.485 15:36:33 -- common/autotest_common.sh@817 -- # '[' -z 62429 ']' 00:18:03.485 15:36:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.485 15:36:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:03.485 15:36:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
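The locks_exist check that just ran reduces to asking lslocks whether the target pid holds a file lock whose path contains spdk_cpu_lock. A one-line equivalent, with the pid from this run filled in purely for illustration:

  pid=62429
  lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "pid $pid holds a CPU core lock"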
00:18:03.485 15:36:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:03.485 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:18:03.485 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62429) - No such process 00:18:03.485 ERROR: process (pid: 62429) is no longer running 00:18:03.485 15:36:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:03.485 15:36:33 -- common/autotest_common.sh@850 -- # return 1 00:18:03.485 15:36:33 -- common/autotest_common.sh@641 -- # es=1 00:18:03.485 15:36:33 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:03.485 15:36:33 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:03.485 15:36:33 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:03.485 15:36:33 -- event/cpu_locks.sh@54 -- # no_locks 00:18:03.485 15:36:33 -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:03.485 15:36:33 -- event/cpu_locks.sh@26 -- # local lock_files 00:18:03.485 15:36:33 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:03.485 00:18:03.485 real 0m2.002s 00:18:03.485 user 0m2.216s 00:18:03.485 sys 0m0.569s 00:18:03.485 15:36:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:03.485 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:18:03.485 ************************************ 00:18:03.485 END TEST default_locks 00:18:03.485 ************************************ 00:18:03.486 15:36:33 -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:18:03.486 15:36:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:03.486 15:36:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:03.486 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:18:03.743 ************************************ 00:18:03.743 START TEST default_locks_via_rpc 00:18:03.743 ************************************ 00:18:03.743 15:36:33 -- common/autotest_common.sh@1111 -- # default_locks_via_rpc 00:18:03.743 15:36:33 -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=62497 00:18:03.743 15:36:33 -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:03.743 15:36:33 -- event/cpu_locks.sh@63 -- # waitforlisten 62497 00:18:03.743 15:36:33 -- common/autotest_common.sh@817 -- # '[' -z 62497 ']' 00:18:03.743 15:36:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:03.743 15:36:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:03.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:03.743 15:36:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:03.743 15:36:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:03.743 15:36:33 -- common/autotest_common.sh@10 -- # set +x 00:18:03.743 [2024-04-26 15:36:33.849765] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
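The es=1 accounting above is the usual expected-failure pattern: waitforlisten is run against a pid that has already been killed, the kill -0 probe reports "No such process", and the test passes only because the wrapper inverts the result. A simplified stand-in for that wrapper (the real NOT helper in autotest_common.sh is more elaborate):

  not() { if "$@"; then return 1; else return 0; fi; }
  not kill -0 62429 && echo "pid 62429 is gone, as the test expects"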
00:18:03.743 [2024-04-26 15:36:33.849861] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62497 ] 00:18:03.743 [2024-04-26 15:36:33.987935] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.001 [2024-04-26 15:36:34.119937] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:04.568 15:36:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:04.568 15:36:34 -- common/autotest_common.sh@850 -- # return 0 00:18:04.568 15:36:34 -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:18:04.568 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.568 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:18:04.568 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.568 15:36:34 -- event/cpu_locks.sh@67 -- # no_locks 00:18:04.568 15:36:34 -- event/cpu_locks.sh@26 -- # lock_files=() 00:18:04.568 15:36:34 -- event/cpu_locks.sh@26 -- # local lock_files 00:18:04.568 15:36:34 -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:18:04.568 15:36:34 -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:18:04.568 15:36:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:04.568 15:36:34 -- common/autotest_common.sh@10 -- # set +x 00:18:04.568 15:36:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:04.568 15:36:34 -- event/cpu_locks.sh@71 -- # locks_exist 62497 00:18:04.568 15:36:34 -- event/cpu_locks.sh@22 -- # lslocks -p 62497 00:18:04.568 15:36:34 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:05.135 15:36:35 -- event/cpu_locks.sh@73 -- # killprocess 62497 00:18:05.135 15:36:35 -- common/autotest_common.sh@936 -- # '[' -z 62497 ']' 00:18:05.135 15:36:35 -- common/autotest_common.sh@940 -- # kill -0 62497 00:18:05.135 15:36:35 -- common/autotest_common.sh@941 -- # uname 00:18:05.135 15:36:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:05.135 15:36:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62497 00:18:05.135 killing process with pid 62497 00:18:05.135 15:36:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:05.135 15:36:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:05.135 15:36:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62497' 00:18:05.135 15:36:35 -- common/autotest_common.sh@955 -- # kill 62497 00:18:05.135 15:36:35 -- common/autotest_common.sh@960 -- # wait 62497 00:18:05.393 00:18:05.393 real 0m1.877s 00:18:05.393 user 0m1.988s 00:18:05.393 sys 0m0.547s 00:18:05.393 15:36:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:05.393 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:18:05.393 ************************************ 00:18:05.393 END TEST default_locks_via_rpc 00:18:05.393 ************************************ 00:18:05.653 15:36:35 -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:18:05.653 15:36:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:05.653 15:36:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:05.653 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:18:05.653 ************************************ 00:18:05.653 START TEST non_locking_app_on_locked_coremask 00:18:05.653 ************************************ 00:18:05.653 15:36:35 -- 
common/autotest_common.sh@1111 -- # non_locking_app_on_locked_coremask 00:18:05.653 15:36:35 -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=62570 00:18:05.653 15:36:35 -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:05.653 15:36:35 -- event/cpu_locks.sh@81 -- # waitforlisten 62570 /var/tmp/spdk.sock 00:18:05.653 15:36:35 -- common/autotest_common.sh@817 -- # '[' -z 62570 ']' 00:18:05.653 15:36:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.653 15:36:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:05.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:05.653 15:36:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.653 15:36:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:05.653 15:36:35 -- common/autotest_common.sh@10 -- # set +x 00:18:05.653 [2024-04-26 15:36:35.846132] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:05.653 [2024-04-26 15:36:35.846327] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62570 ] 00:18:05.911 [2024-04-26 15:36:35.989705] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:05.911 [2024-04-26 15:36:36.117889] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:06.846 15:36:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:06.846 15:36:36 -- common/autotest_common.sh@850 -- # return 0 00:18:06.846 15:36:36 -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=62598 00:18:06.846 15:36:36 -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:18:06.846 15:36:36 -- event/cpu_locks.sh@85 -- # waitforlisten 62598 /var/tmp/spdk2.sock 00:18:06.846 15:36:36 -- common/autotest_common.sh@817 -- # '[' -z 62598 ']' 00:18:06.846 15:36:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:06.846 15:36:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:06.846 15:36:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:06.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:06.846 15:36:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:06.846 15:36:36 -- common/autotest_common.sh@10 -- # set +x 00:18:06.846 [2024-04-26 15:36:36.877183] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:06.846 [2024-04-26 15:36:36.877284] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62598 ] 00:18:06.846 [2024-04-26 15:36:37.019417] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
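The "CPU core locks deactivated" notice above is the crux of this test: the first spdk_tgt (pid 62570, -m 0x1) holds the core-0 lock file, and the second instance only starts on the same core because it was launched with --disable-cpumask-locks. Reduced to the two invocations that matter (backgrounding and waiting omitted):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &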
00:18:06.846 [2024-04-26 15:36:37.019496] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:07.105 [2024-04-26 15:36:37.252023] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:07.671 15:36:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:07.671 15:36:37 -- common/autotest_common.sh@850 -- # return 0 00:18:07.671 15:36:37 -- event/cpu_locks.sh@87 -- # locks_exist 62570 00:18:07.671 15:36:37 -- event/cpu_locks.sh@22 -- # lslocks -p 62570 00:18:07.671 15:36:37 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:08.237 15:36:38 -- event/cpu_locks.sh@89 -- # killprocess 62570 00:18:08.237 15:36:38 -- common/autotest_common.sh@936 -- # '[' -z 62570 ']' 00:18:08.237 15:36:38 -- common/autotest_common.sh@940 -- # kill -0 62570 00:18:08.237 15:36:38 -- common/autotest_common.sh@941 -- # uname 00:18:08.237 15:36:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:08.237 15:36:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62570 00:18:08.495 15:36:38 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:08.495 15:36:38 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:08.495 killing process with pid 62570 00:18:08.495 15:36:38 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62570' 00:18:08.495 15:36:38 -- common/autotest_common.sh@955 -- # kill 62570 00:18:08.495 15:36:38 -- common/autotest_common.sh@960 -- # wait 62570 00:18:09.429 15:36:39 -- event/cpu_locks.sh@90 -- # killprocess 62598 00:18:09.429 15:36:39 -- common/autotest_common.sh@936 -- # '[' -z 62598 ']' 00:18:09.429 15:36:39 -- common/autotest_common.sh@940 -- # kill -0 62598 00:18:09.429 15:36:39 -- common/autotest_common.sh@941 -- # uname 00:18:09.429 15:36:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:09.429 15:36:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62598 00:18:09.429 15:36:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:09.429 15:36:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:09.430 15:36:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62598' 00:18:09.430 killing process with pid 62598 00:18:09.430 15:36:39 -- common/autotest_common.sh@955 -- # kill 62598 00:18:09.430 15:36:39 -- common/autotest_common.sh@960 -- # wait 62598 00:18:09.688 00:18:09.688 real 0m4.046s 00:18:09.688 user 0m4.496s 00:18:09.688 sys 0m1.041s 00:18:09.688 15:36:39 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:09.688 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:18:09.688 ************************************ 00:18:09.688 END TEST non_locking_app_on_locked_coremask 00:18:09.688 ************************************ 00:18:09.688 15:36:39 -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:18:09.688 15:36:39 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:09.688 15:36:39 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:09.688 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:18:09.688 ************************************ 00:18:09.688 START TEST locking_app_on_unlocked_coremask 00:18:09.688 ************************************ 00:18:09.688 15:36:39 -- common/autotest_common.sh@1111 -- # locking_app_on_unlocked_coremask 00:18:09.688 15:36:39 -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:18:09.688 15:36:39 -- 
event/cpu_locks.sh@98 -- # spdk_tgt_pid=62682 00:18:09.688 15:36:39 -- event/cpu_locks.sh@99 -- # waitforlisten 62682 /var/tmp/spdk.sock 00:18:09.688 15:36:39 -- common/autotest_common.sh@817 -- # '[' -z 62682 ']' 00:18:09.688 15:36:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:09.688 15:36:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:09.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:09.688 15:36:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:09.688 15:36:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:09.688 15:36:39 -- common/autotest_common.sh@10 -- # set +x 00:18:09.946 [2024-04-26 15:36:40.004412] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:09.946 [2024-04-26 15:36:40.004518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62682 ] 00:18:09.946 [2024-04-26 15:36:40.144798] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:09.946 [2024-04-26 15:36:40.144884] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:10.204 [2024-04-26 15:36:40.277675] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:10.769 15:36:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:10.769 15:36:40 -- common/autotest_common.sh@850 -- # return 0 00:18:10.769 15:36:40 -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=62710 00:18:10.769 15:36:40 -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:10.769 15:36:40 -- event/cpu_locks.sh@103 -- # waitforlisten 62710 /var/tmp/spdk2.sock 00:18:10.769 15:36:40 -- common/autotest_common.sh@817 -- # '[' -z 62710 ']' 00:18:10.769 15:36:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:10.769 15:36:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:10.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:10.769 15:36:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:10.769 15:36:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:10.769 15:36:40 -- common/autotest_common.sh@10 -- # set +x 00:18:10.769 [2024-04-26 15:36:41.020098] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
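A recurring helper throughout these lock tests is killprocess, whose probes (kill -0, uname, ps --no-headers -o comm=) show up repeatedly in the log. A condensed reconstruction, dropping the sudo special-casing the real autotest_common.sh version performs:

  killprocess() {
    local pid=$1
    kill -0 "$pid" || return 1                     # refuse to act on a dead pid
    ps --no-headers -o comm= "$pid"                # reported as reactor_0 above
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
  }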
00:18:10.769 [2024-04-26 15:36:41.020215] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62710 ] 00:18:11.027 [2024-04-26 15:36:41.164053] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.285 [2024-04-26 15:36:41.405584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:11.852 15:36:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:11.852 15:36:41 -- common/autotest_common.sh@850 -- # return 0 00:18:11.852 15:36:41 -- event/cpu_locks.sh@105 -- # locks_exist 62710 00:18:11.852 15:36:41 -- event/cpu_locks.sh@22 -- # lslocks -p 62710 00:18:11.852 15:36:41 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:12.419 15:36:42 -- event/cpu_locks.sh@107 -- # killprocess 62682 00:18:12.419 15:36:42 -- common/autotest_common.sh@936 -- # '[' -z 62682 ']' 00:18:12.419 15:36:42 -- common/autotest_common.sh@940 -- # kill -0 62682 00:18:12.419 15:36:42 -- common/autotest_common.sh@941 -- # uname 00:18:12.419 15:36:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:12.419 15:36:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62682 00:18:12.419 15:36:42 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:12.419 killing process with pid 62682 00:18:12.419 15:36:42 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:12.419 15:36:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62682' 00:18:12.419 15:36:42 -- common/autotest_common.sh@955 -- # kill 62682 00:18:12.419 15:36:42 -- common/autotest_common.sh@960 -- # wait 62682 00:18:13.380 15:36:43 -- event/cpu_locks.sh@108 -- # killprocess 62710 00:18:13.380 15:36:43 -- common/autotest_common.sh@936 -- # '[' -z 62710 ']' 00:18:13.380 15:36:43 -- common/autotest_common.sh@940 -- # kill -0 62710 00:18:13.380 15:36:43 -- common/autotest_common.sh@941 -- # uname 00:18:13.380 15:36:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:13.380 15:36:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62710 00:18:13.380 15:36:43 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:13.380 killing process with pid 62710 00:18:13.380 15:36:43 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:13.380 15:36:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62710' 00:18:13.380 15:36:43 -- common/autotest_common.sh@955 -- # kill 62710 00:18:13.380 15:36:43 -- common/autotest_common.sh@960 -- # wait 62710 00:18:13.949 00:18:13.949 real 0m4.076s 00:18:13.949 user 0m4.514s 00:18:13.949 sys 0m1.038s 00:18:13.949 15:36:44 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:13.949 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:18:13.949 ************************************ 00:18:13.949 END TEST locking_app_on_unlocked_coremask 00:18:13.949 ************************************ 00:18:13.949 15:36:44 -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:18:13.949 15:36:44 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:13.949 15:36:44 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:13.949 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:18:13.949 ************************************ 00:18:13.949 START TEST locking_app_on_locked_coremask 00:18:13.949 
************************************ 00:18:13.949 15:36:44 -- common/autotest_common.sh@1111 -- # locking_app_on_locked_coremask 00:18:13.949 15:36:44 -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=62794 00:18:13.949 15:36:44 -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:18:13.949 15:36:44 -- event/cpu_locks.sh@116 -- # waitforlisten 62794 /var/tmp/spdk.sock 00:18:13.949 15:36:44 -- common/autotest_common.sh@817 -- # '[' -z 62794 ']' 00:18:13.949 15:36:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.949 15:36:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:13.949 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.949 15:36:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.949 15:36:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:13.949 15:36:44 -- common/autotest_common.sh@10 -- # set +x 00:18:13.949 [2024-04-26 15:36:44.196881] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:13.949 [2024-04-26 15:36:44.196981] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62794 ] 00:18:14.208 [2024-04-26 15:36:44.332484] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:14.208 [2024-04-26 15:36:44.450763] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:15.144 15:36:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:15.144 15:36:45 -- common/autotest_common.sh@850 -- # return 0 00:18:15.144 15:36:45 -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=62822 00:18:15.144 15:36:45 -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:18:15.144 15:36:45 -- event/cpu_locks.sh@120 -- # NOT waitforlisten 62822 /var/tmp/spdk2.sock 00:18:15.144 15:36:45 -- common/autotest_common.sh@638 -- # local es=0 00:18:15.144 15:36:45 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62822 /var/tmp/spdk2.sock 00:18:15.144 15:36:45 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:18:15.144 15:36:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.144 15:36:45 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:18:15.145 15:36:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:15.145 15:36:45 -- common/autotest_common.sh@641 -- # waitforlisten 62822 /var/tmp/spdk2.sock 00:18:15.145 15:36:45 -- common/autotest_common.sh@817 -- # '[' -z 62822 ']' 00:18:15.145 15:36:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:15.145 15:36:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:15.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:15.145 15:36:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:15.145 15:36:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:15.145 15:36:45 -- common/autotest_common.sh@10 -- # set +x 00:18:15.145 [2024-04-26 15:36:45.203569] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:15.145 [2024-04-26 15:36:45.203677] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62822 ] 00:18:15.145 [2024-04-26 15:36:45.344665] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 62794 has claimed it. 00:18:15.145 [2024-04-26 15:36:45.344772] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:15.712 ERROR: process (pid: 62822) is no longer running 00:18:15.712 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62822) - No such process 00:18:15.712 15:36:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:15.712 15:36:45 -- common/autotest_common.sh@850 -- # return 1 00:18:15.712 15:36:45 -- common/autotest_common.sh@641 -- # es=1 00:18:15.712 15:36:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:15.712 15:36:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:15.712 15:36:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:15.712 15:36:45 -- event/cpu_locks.sh@122 -- # locks_exist 62794 00:18:15.712 15:36:45 -- event/cpu_locks.sh@22 -- # lslocks -p 62794 00:18:15.712 15:36:45 -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:18:16.280 15:36:46 -- event/cpu_locks.sh@124 -- # killprocess 62794 00:18:16.280 15:36:46 -- common/autotest_common.sh@936 -- # '[' -z 62794 ']' 00:18:16.280 15:36:46 -- common/autotest_common.sh@940 -- # kill -0 62794 00:18:16.280 15:36:46 -- common/autotest_common.sh@941 -- # uname 00:18:16.280 15:36:46 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:16.280 15:36:46 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62794 00:18:16.280 15:36:46 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:16.280 killing process with pid 62794 00:18:16.280 15:36:46 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:16.280 15:36:46 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62794' 00:18:16.280 15:36:46 -- common/autotest_common.sh@955 -- # kill 62794 00:18:16.280 15:36:46 -- common/autotest_common.sh@960 -- # wait 62794 00:18:16.849 00:18:16.849 real 0m2.706s 00:18:16.849 user 0m3.089s 00:18:16.849 sys 0m0.681s 00:18:16.849 ************************************ 00:18:16.849 END TEST locking_app_on_locked_coremask 00:18:16.849 ************************************ 00:18:16.849 15:36:46 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:16.849 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 15:36:46 -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:18:16.849 15:36:46 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:16.849 15:36:46 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:16.849 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 ************************************ 00:18:16.849 START TEST locking_overlapped_coremask 00:18:16.849 ************************************ 00:18:16.849 15:36:46 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask 00:18:16.849 15:36:46 -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=62883 00:18:16.849 15:36:46 -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:18:16.849 15:36:46 -- event/cpu_locks.sh@133 -- # waitforlisten 62883 /var/tmp/spdk.sock 00:18:16.849 
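That "Cannot create lock on core 0, probably process 62794 has claimed it" failure is exactly what locking_app_on_locked_coremask is after: a second spdk_tgt pointed at an already-claimed core must refuse to start. Outside the harness the same behaviour can be provoked with two invocations on the same mask (expect the second to exit non-zero):

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &        # claims core 0
  sleep 1
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
  # expected: "Unable to acquire lock on assigned core mask - exiting."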
15:36:46 -- common/autotest_common.sh@817 -- # '[' -z 62883 ']' 00:18:16.849 15:36:46 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:16.849 15:36:46 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:16.849 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:16.849 15:36:46 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:16.849 15:36:46 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:16.849 15:36:46 -- common/autotest_common.sh@10 -- # set +x 00:18:16.849 [2024-04-26 15:36:47.013664] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:16.849 [2024-04-26 15:36:47.013775] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62883 ] 00:18:17.108 [2024-04-26 15:36:47.143031] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:17.108 [2024-04-26 15:36:47.248680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:17.108 [2024-04-26 15:36:47.248845] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:17.108 [2024-04-26 15:36:47.248847] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:18.041 15:36:47 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:18.041 15:36:47 -- common/autotest_common.sh@850 -- # return 0 00:18:18.041 15:36:47 -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=62912 00:18:18.041 15:36:47 -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:18:18.041 15:36:47 -- event/cpu_locks.sh@137 -- # NOT waitforlisten 62912 /var/tmp/spdk2.sock 00:18:18.041 15:36:47 -- common/autotest_common.sh@638 -- # local es=0 00:18:18.041 15:36:47 -- common/autotest_common.sh@640 -- # valid_exec_arg waitforlisten 62912 /var/tmp/spdk2.sock 00:18:18.041 15:36:47 -- common/autotest_common.sh@626 -- # local arg=waitforlisten 00:18:18.041 15:36:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:18.041 15:36:47 -- common/autotest_common.sh@630 -- # type -t waitforlisten 00:18:18.041 15:36:47 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:18.041 15:36:47 -- common/autotest_common.sh@641 -- # waitforlisten 62912 /var/tmp/spdk2.sock 00:18:18.041 15:36:47 -- common/autotest_common.sh@817 -- # '[' -z 62912 ']' 00:18:18.041 15:36:47 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:18.041 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:18.041 15:36:47 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:18.041 15:36:47 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:18.041 15:36:47 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:18.041 15:36:47 -- common/autotest_common.sh@10 -- # set +x 00:18:18.041 [2024-04-26 15:36:48.035077] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
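For the overlapped-coremask case being set up here: the surviving target runs with -m 0x7 (cores 0-2) and the new one is started with -m 0x1c (cores 2-4), so the two masks collide on core 2 only, which is the core the claim error in the following lines names. The overlap is a plain bitwise AND:

  printf 'overlap mask: 0x%x\n' $(( 0x7 & 0x1c ))   # prints 0x4, i.e. core 2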
00:18:18.041 [2024-04-26 15:36:48.035175] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62912 ] 00:18:18.041 [2024-04-26 15:36:48.177022] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62883 has claimed it. 00:18:18.041 [2024-04-26 15:36:48.177087] app.c: 821:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:18:18.606 ERROR: process (pid: 62912) is no longer running 00:18:18.606 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 832: kill: (62912) - No such process 00:18:18.606 15:36:48 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:18.606 15:36:48 -- common/autotest_common.sh@850 -- # return 1 00:18:18.607 15:36:48 -- common/autotest_common.sh@641 -- # es=1 00:18:18.607 15:36:48 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:18.607 15:36:48 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:18.607 15:36:48 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:18.607 15:36:48 -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:18:18.607 15:36:48 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:18.607 15:36:48 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:18.607 15:36:48 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:18.607 15:36:48 -- event/cpu_locks.sh@141 -- # killprocess 62883 00:18:18.607 15:36:48 -- common/autotest_common.sh@936 -- # '[' -z 62883 ']' 00:18:18.607 15:36:48 -- common/autotest_common.sh@940 -- # kill -0 62883 00:18:18.607 15:36:48 -- common/autotest_common.sh@941 -- # uname 00:18:18.607 15:36:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:18.607 15:36:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62883 00:18:18.607 15:36:48 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:18.607 killing process with pid 62883 00:18:18.607 15:36:48 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:18.607 15:36:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62883' 00:18:18.607 15:36:48 -- common/autotest_common.sh@955 -- # kill 62883 00:18:18.607 15:36:48 -- common/autotest_common.sh@960 -- # wait 62883 00:18:19.173 00:18:19.173 real 0m2.282s 00:18:19.173 user 0m6.330s 00:18:19.173 sys 0m0.429s 00:18:19.173 15:36:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:19.173 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.173 ************************************ 00:18:19.173 END TEST locking_overlapped_coremask 00:18:19.173 ************************************ 00:18:19.173 15:36:49 -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:18:19.173 15:36:49 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:19.173 15:36:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:19.173 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.173 ************************************ 00:18:19.173 START TEST locking_overlapped_coremask_via_rpc 00:18:19.173 ************************************ 
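The check_remaining_locks step that closed the previous test compares the spdk_cpu_lock files actually present under /var/tmp with the set the surviving -m 0x7 target should hold (cores 0 through 2). Its essence is a two-array comparison, mirroring the globs in the log:

  locks=(/var/tmp/spdk_cpu_lock_*)
  locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
  [[ "${locks[*]}" == "${locks_expected[*]}" ]] && echo 'lock files match the expected cores'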
00:18:19.173 15:36:49 -- common/autotest_common.sh@1111 -- # locking_overlapped_coremask_via_rpc 00:18:19.173 15:36:49 -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=62963 00:18:19.173 15:36:49 -- event/cpu_locks.sh@149 -- # waitforlisten 62963 /var/tmp/spdk.sock 00:18:19.173 15:36:49 -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:18:19.173 15:36:49 -- common/autotest_common.sh@817 -- # '[' -z 62963 ']' 00:18:19.173 15:36:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:19.173 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:19.173 15:36:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:19.173 15:36:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:19.173 15:36:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:19.173 15:36:49 -- common/autotest_common.sh@10 -- # set +x 00:18:19.173 [2024-04-26 15:36:49.416035] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:19.173 [2024-04-26 15:36:49.416425] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62963 ] 00:18:19.430 [2024-04-26 15:36:49.551800] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:19.430 [2024-04-26 15:36:49.551849] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:19.430 [2024-04-26 15:36:49.673097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.430 [2024-04-26 15:36:49.673243] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.430 [2024-04-26 15:36:49.673248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.364 15:36:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:20.364 15:36:50 -- common/autotest_common.sh@850 -- # return 0 00:18:20.364 15:36:50 -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=62993 00:18:20.364 15:36:50 -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:18:20.364 15:36:50 -- event/cpu_locks.sh@153 -- # waitforlisten 62993 /var/tmp/spdk2.sock 00:18:20.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:20.364 15:36:50 -- common/autotest_common.sh@817 -- # '[' -z 62993 ']' 00:18:20.364 15:36:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:20.364 15:36:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:20.364 15:36:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:20.364 15:36:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:20.364 15:36:50 -- common/autotest_common.sh@10 -- # set +x 00:18:20.364 [2024-04-26 15:36:50.437158] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:20.364 [2024-04-26 15:36:50.437948] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62993 ] 00:18:20.364 [2024-04-26 15:36:50.585478] app.c: 825:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:18:20.364 [2024-04-26 15:36:50.585549] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:20.621 [2024-04-26 15:36:50.822723] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:20.621 [2024-04-26 15:36:50.826271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:20.621 [2024-04-26 15:36:50.826271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:18:21.200 15:36:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.200 15:36:51 -- common/autotest_common.sh@850 -- # return 0 00:18:21.200 15:36:51 -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:18:21.200 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.200 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.200 15:36:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:21.200 15:36:51 -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:21.200 15:36:51 -- common/autotest_common.sh@638 -- # local es=0 00:18:21.200 15:36:51 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:21.200 15:36:51 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:18:21.200 15:36:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:21.200 15:36:51 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:18:21.200 15:36:51 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:21.200 15:36:51 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:18:21.200 15:36:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:21.200 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.200 [2024-04-26 15:36:51.475264] app.c: 690:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 62963 has claimed it. 00:18:21.200 2024/04/26 15:36:51 error on JSON-RPC call, method: framework_enable_cpumask_locks, params: map[], err: error received for framework_enable_cpumask_locks method, err: Code=-32603 Msg=Failed to claim CPU core: 2 00:18:21.200 request: 00:18:21.200 { 00:18:21.200 "method": "framework_enable_cpumask_locks", 00:18:21.200 "params": {} 00:18:21.200 } 00:18:21.200 Got JSON-RPC error response 00:18:21.200 GoRPCClient: error on JSON-RPC call 00:18:21.200 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:21.200 15:36:51 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:18:21.200 15:36:51 -- common/autotest_common.sh@641 -- # es=1 00:18:21.200 15:36:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:21.200 15:36:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:21.200 15:36:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:21.200 15:36:51 -- event/cpu_locks.sh@158 -- # waitforlisten 62963 /var/tmp/spdk.sock 00:18:21.200 15:36:51 -- common/autotest_common.sh@817 -- # '[' -z 62963 ']' 00:18:21.200 15:36:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.200 15:36:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.200 15:36:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.200 15:36:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.200 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.765 15:36:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.765 15:36:51 -- common/autotest_common.sh@850 -- # return 0 00:18:21.765 15:36:51 -- event/cpu_locks.sh@159 -- # waitforlisten 62993 /var/tmp/spdk2.sock 00:18:21.765 15:36:51 -- common/autotest_common.sh@817 -- # '[' -z 62993 ']' 00:18:21.765 15:36:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk2.sock 00:18:21.765 15:36:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:21.765 15:36:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:18:21.765 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:18:21.765 15:36:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:21.765 15:36:51 -- common/autotest_common.sh@10 -- # set +x 00:18:21.765 15:36:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:21.765 15:36:52 -- common/autotest_common.sh@850 -- # return 0 00:18:21.765 15:36:52 -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:18:21.765 15:36:52 -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:18:22.024 15:36:52 -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:18:22.024 15:36:52 -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:18:22.024 ************************************ 00:18:22.024 END TEST locking_overlapped_coremask_via_rpc 00:18:22.024 ************************************ 00:18:22.024 00:18:22.024 real 0m2.703s 00:18:22.024 user 0m1.409s 00:18:22.024 sys 0m0.214s 00:18:22.024 15:36:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:22.024 15:36:52 -- common/autotest_common.sh@10 -- # set +x 00:18:22.024 15:36:52 -- event/cpu_locks.sh@174 -- # cleanup 00:18:22.024 15:36:52 -- event/cpu_locks.sh@15 -- # [[ -z 62963 ]] 00:18:22.024 15:36:52 -- event/cpu_locks.sh@15 -- # killprocess 62963 00:18:22.024 15:36:52 -- common/autotest_common.sh@936 -- # '[' -z 62963 ']' 00:18:22.024 15:36:52 -- common/autotest_common.sh@940 -- # kill -0 62963 00:18:22.024 15:36:52 -- common/autotest_common.sh@941 -- # uname 00:18:22.024 15:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.024 15:36:52 -- common/autotest_common.sh@942 -- # ps 
--no-headers -o comm= 62963 00:18:22.024 killing process with pid 62963 00:18:22.024 15:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:22.024 15:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:22.024 15:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62963' 00:18:22.024 15:36:52 -- common/autotest_common.sh@955 -- # kill 62963 00:18:22.024 15:36:52 -- common/autotest_common.sh@960 -- # wait 62963 00:18:22.282 15:36:52 -- event/cpu_locks.sh@16 -- # [[ -z 62993 ]] 00:18:22.282 15:36:52 -- event/cpu_locks.sh@16 -- # killprocess 62993 00:18:22.282 15:36:52 -- common/autotest_common.sh@936 -- # '[' -z 62993 ']' 00:18:22.282 15:36:52 -- common/autotest_common.sh@940 -- # kill -0 62993 00:18:22.282 15:36:52 -- common/autotest_common.sh@941 -- # uname 00:18:22.282 15:36:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:22.282 15:36:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 62993 00:18:22.282 killing process with pid 62993 00:18:22.282 15:36:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:18:22.282 15:36:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:18:22.282 15:36:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 62993' 00:18:22.282 15:36:52 -- common/autotest_common.sh@955 -- # kill 62993 00:18:22.282 15:36:52 -- common/autotest_common.sh@960 -- # wait 62993 00:18:22.850 15:36:53 -- event/cpu_locks.sh@18 -- # rm -f 00:18:22.850 15:36:53 -- event/cpu_locks.sh@1 -- # cleanup 00:18:22.850 15:36:53 -- event/cpu_locks.sh@15 -- # [[ -z 62963 ]] 00:18:22.850 15:36:53 -- event/cpu_locks.sh@15 -- # killprocess 62963 00:18:22.850 15:36:53 -- common/autotest_common.sh@936 -- # '[' -z 62963 ']' 00:18:22.850 15:36:53 -- common/autotest_common.sh@940 -- # kill -0 62963 00:18:22.850 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (62963) - No such process 00:18:22.850 Process with pid 62963 is not found 00:18:22.850 Process with pid 62993 is not found 00:18:22.850 15:36:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 62963 is not found' 00:18:22.850 15:36:53 -- event/cpu_locks.sh@16 -- # [[ -z 62993 ]] 00:18:22.850 15:36:53 -- event/cpu_locks.sh@16 -- # killprocess 62993 00:18:22.851 15:36:53 -- common/autotest_common.sh@936 -- # '[' -z 62993 ']' 00:18:22.851 15:36:53 -- common/autotest_common.sh@940 -- # kill -0 62993 00:18:22.851 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (62993) - No such process 00:18:22.851 15:36:53 -- common/autotest_common.sh@963 -- # echo 'Process with pid 62993 is not found' 00:18:22.851 15:36:53 -- event/cpu_locks.sh@18 -- # rm -f 00:18:22.851 ************************************ 00:18:22.851 END TEST cpu_locks 00:18:22.851 ************************************ 00:18:22.851 00:18:22.851 real 0m21.486s 00:18:22.851 user 0m37.061s 00:18:22.851 sys 0m5.541s 00:18:22.851 15:36:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:22.851 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:18:22.851 ************************************ 00:18:22.851 END TEST event 00:18:22.851 ************************************ 00:18:22.851 00:18:22.851 real 0m50.669s 00:18:22.851 user 1m36.665s 00:18:22.851 sys 0m9.610s 00:18:22.851 15:36:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:22.851 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:18:22.851 15:36:53 -- spdk/autotest.sh@178 -- # run_test thread 
/home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:22.851 15:36:53 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:22.851 15:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:22.851 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:18:23.111 ************************************ 00:18:23.111 START TEST thread 00:18:23.111 ************************************ 00:18:23.111 15:36:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:18:23.111 * Looking for test storage... 00:18:23.111 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:18:23.111 15:36:53 -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:23.111 15:36:53 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:18:23.111 15:36:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:23.111 15:36:53 -- common/autotest_common.sh@10 -- # set +x 00:18:23.111 ************************************ 00:18:23.111 START TEST thread_poller_perf 00:18:23.111 ************************************ 00:18:23.111 15:36:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:18:23.111 [2024-04-26 15:36:53.343370] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:23.111 [2024-04-26 15:36:53.343623] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63155 ] 00:18:23.369 [2024-04-26 15:36:53.483916] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:23.369 [2024-04-26 15:36:53.610303] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:23.369 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:18:24.744 ====================================== 00:18:24.744 busy:2210832818 (cyc) 00:18:24.744 total_run_count: 311000 00:18:24.744 tsc_hz: 2200000000 (cyc) 00:18:24.744 ====================================== 00:18:24.744 poller_cost: 7108 (cyc), 3230 (nsec) 00:18:24.744 ************************************ 00:18:24.744 END TEST thread_poller_perf 00:18:24.744 ************************************ 00:18:24.744 00:18:24.744 real 0m1.400s 00:18:24.744 user 0m1.231s 00:18:24.744 sys 0m0.062s 00:18:24.744 15:36:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:24.744 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:18:24.744 15:36:54 -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:24.744 15:36:54 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:18:24.744 15:36:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:24.744 15:36:54 -- common/autotest_common.sh@10 -- # set +x 00:18:24.744 ************************************ 00:18:24.744 START TEST thread_poller_perf 00:18:24.744 ************************************ 00:18:24.744 15:36:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:18:24.744 [2024-04-26 15:36:54.867878] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:24.744 [2024-04-26 15:36:54.867948] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63195 ] 00:18:24.744 [2024-04-26 15:36:55.003775] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:25.005 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:18:25.005 [2024-04-26 15:36:55.106364] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.950 ====================================== 00:18:25.950 busy:2202728698 (cyc) 00:18:25.950 total_run_count: 4327000 00:18:25.950 tsc_hz: 2200000000 (cyc) 00:18:25.950 ====================================== 00:18:25.950 poller_cost: 509 (cyc), 231 (nsec) 00:18:25.950 ************************************ 00:18:25.950 END TEST thread_poller_perf 00:18:25.950 ************************************ 00:18:25.950 00:18:25.950 real 0m1.375s 00:18:25.950 user 0m1.214s 00:18:25.950 sys 0m0.055s 00:18:25.950 15:36:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:25.950 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:18:26.208 15:36:56 -- thread/thread.sh@17 -- # [[ y != \y ]] 00:18:26.208 ************************************ 00:18:26.208 END TEST thread 00:18:26.208 ************************************ 00:18:26.208 00:18:26.208 real 0m3.100s 00:18:26.208 user 0m2.556s 00:18:26.208 sys 0m0.297s 00:18:26.208 15:36:56 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:26.208 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:18:26.208 15:36:56 -- spdk/autotest.sh@179 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:18:26.208 15:36:56 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:18:26.208 15:36:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:26.208 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:18:26.208 ************************************ 00:18:26.208 START TEST accel 00:18:26.208 ************************************ 00:18:26.208 15:36:56 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:18:26.208 * Looking for test storage... 00:18:26.208 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:18:26.208 15:36:56 -- accel/accel.sh@81 -- # declare -A expected_opcs 00:18:26.208 15:36:56 -- accel/accel.sh@82 -- # get_expected_opcs 00:18:26.208 15:36:56 -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:18:26.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:26.208 15:36:56 -- accel/accel.sh@62 -- # spdk_tgt_pid=63274 00:18:26.208 15:36:56 -- accel/accel.sh@63 -- # waitforlisten 63274 00:18:26.208 15:36:56 -- common/autotest_common.sh@817 -- # '[' -z 63274 ']' 00:18:26.208 15:36:56 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:26.208 15:36:56 -- common/autotest_common.sh@822 -- # local max_retries=100 00:18:26.208 15:36:56 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
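The two poller_perf summaries above can be sanity-checked by hand: poller_cost is the busy cycle count divided by total_run_count, converted to nanoseconds via tsc_hz. A minimal shell sketch of that arithmetic (the poller_cost helper name is made up for illustration, and the integer truncation is an assumption that happens to reproduce the printed figures):

poller_cost() {
  # Recompute the "poller_cost" line from busy cycles, total_run_count and tsc_hz.
  awk -v b="$1" -v r="$2" -v hz="$3" \
    'BEGIN { cyc = int(b / r); printf "%d cyc, %d nsec\n", cyc, int(cyc * 1e9 / hz) }'
}
poller_cost 2210832818  311000 2200000000   # 1us-period run: 7108 cyc, 3230 nsec
poller_cost 2202728698 4327000 2200000000   # 0us-period run: 509 cyc, 231 nsec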
00:18:26.208 15:36:56 -- common/autotest_common.sh@826 -- # xtrace_disable 00:18:26.208 15:36:56 -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:18:26.208 15:36:56 -- accel/accel.sh@61 -- # build_accel_config 00:18:26.208 15:36:56 -- common/autotest_common.sh@10 -- # set +x 00:18:26.208 15:36:56 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:26.208 15:36:56 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:26.208 15:36:56 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:26.208 15:36:56 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:26.208 15:36:56 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:26.208 15:36:56 -- accel/accel.sh@40 -- # local IFS=, 00:18:26.208 15:36:56 -- accel/accel.sh@41 -- # jq -r . 00:18:26.466 [2024-04-26 15:36:56.523592] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:26.466 [2024-04-26 15:36:56.523678] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63274 ] 00:18:26.466 [2024-04-26 15:36:56.660007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.724 [2024-04-26 15:36:56.789016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:27.290 15:36:57 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:18:27.290 15:36:57 -- common/autotest_common.sh@850 -- # return 0 00:18:27.290 15:36:57 -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:18:27.291 15:36:57 -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:18:27.291 15:36:57 -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:18:27.291 15:36:57 -- accel/accel.sh@68 -- # [[ -n '' ]] 00:18:27.291 15:36:57 -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:18:27.291 15:36:57 -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:18:27.291 15:36:57 -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:18:27.291 15:36:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:18:27.291 15:36:57 -- common/autotest_common.sh@10 -- # set +x 00:18:27.291 15:36:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:18:27.291 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.291 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.291 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.291 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.291 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.291 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.291 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.291 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.291 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.291 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 
15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # IFS== 00:18:27.549 15:36:57 -- accel/accel.sh@72 -- # read -r opc module 00:18:27.549 15:36:57 -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:18:27.549 15:36:57 -- accel/accel.sh@75 -- # killprocess 63274 00:18:27.549 15:36:57 -- common/autotest_common.sh@936 -- # '[' -z 63274 ']' 00:18:27.549 15:36:57 -- common/autotest_common.sh@940 -- # kill -0 63274 00:18:27.549 15:36:57 -- common/autotest_common.sh@941 -- # uname 00:18:27.549 15:36:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:18:27.549 15:36:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 63274 00:18:27.549 killing process with pid 63274 00:18:27.549 15:36:57 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:18:27.549 15:36:57 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:18:27.549 15:36:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 63274' 00:18:27.549 15:36:57 -- common/autotest_common.sh@955 -- # kill 63274 00:18:27.549 15:36:57 -- common/autotest_common.sh@960 -- # wait 63274 00:18:27.807 15:36:58 -- accel/accel.sh@76 -- # trap - ERR 00:18:27.807 15:36:58 -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:18:27.807 15:36:58 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:18:27.807 15:36:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:27.807 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 15:36:58 -- common/autotest_common.sh@1111 -- # accel_perf -h 00:18:28.066 15:36:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:18:28.066 15:36:58 -- accel/accel.sh@12 -- # build_accel_config 00:18:28.066 15:36:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:28.066 15:36:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:28.066 15:36:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:28.066 15:36:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:28.066 15:36:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:28.066 15:36:58 -- accel/accel.sh@40 -- # local IFS=, 00:18:28.066 15:36:58 -- accel/accel.sh@41 -- # jq -r . 
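The expected_opcs map being filled in above comes from the accel_get_opc_assignments RPC piped through the jq filter shown in the xtrace. Run standalone on a made-up sample (the opcode list below is illustrative, not the RPC's actual reply), the filter flattens the JSON object into key=value lines that the IFS== read loop then splits into opcode and module:

echo '{ "copy": "software", "fill": "software", "crc32c": "software" }' \
  | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
# copy=software
# fill=software
# crc32c=software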
00:18:28.066 15:36:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.066 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 15:36:58 -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:18:28.066 15:36:58 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:28.066 15:36:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.066 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.066 ************************************ 00:18:28.066 START TEST accel_missing_filename 00:18:28.066 ************************************ 00:18:28.066 15:36:58 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress 00:18:28.066 15:36:58 -- common/autotest_common.sh@638 -- # local es=0 00:18:28.066 15:36:58 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress 00:18:28.066 15:36:58 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:28.066 15:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:28.066 15:36:58 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:28.066 15:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:28.066 15:36:58 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress 00:18:28.066 15:36:58 -- accel/accel.sh@12 -- # build_accel_config 00:18:28.066 15:36:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:18:28.066 15:36:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:28.066 15:36:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:28.066 15:36:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:28.066 15:36:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:28.066 15:36:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:28.066 15:36:58 -- accel/accel.sh@40 -- # local IFS=, 00:18:28.066 15:36:58 -- accel/accel.sh@41 -- # jq -r . 00:18:28.066 [2024-04-26 15:36:58.280306] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:28.066 [2024-04-26 15:36:58.280385] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63352 ] 00:18:28.324 [2024-04-26 15:36:58.417123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:28.324 [2024-04-26 15:36:58.554305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:28.324 [2024-04-26 15:36:58.610862] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:28.596 [2024-04-26 15:36:58.688839] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:18:28.596 A filename is required. 
00:18:28.596 15:36:58 -- common/autotest_common.sh@641 -- # es=234 00:18:28.596 15:36:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:28.596 15:36:58 -- common/autotest_common.sh@650 -- # es=106 00:18:28.596 15:36:58 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:28.597 15:36:58 -- common/autotest_common.sh@658 -- # es=1 00:18:28.597 15:36:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:28.597 00:18:28.597 real 0m0.547s 00:18:28.597 user 0m0.375s 00:18:28.597 sys 0m0.115s 00:18:28.597 15:36:58 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:28.597 ************************************ 00:18:28.597 END TEST accel_missing_filename 00:18:28.597 ************************************ 00:18:28.597 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.597 15:36:58 -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:28.597 15:36:58 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:18:28.597 15:36:58 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:28.597 15:36:58 -- common/autotest_common.sh@10 -- # set +x 00:18:28.867 ************************************ 00:18:28.867 START TEST accel_compress_verify 00:18:28.867 ************************************ 00:18:28.867 15:36:58 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:28.867 15:36:58 -- common/autotest_common.sh@638 -- # local es=0 00:18:28.867 15:36:58 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:28.867 15:36:58 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:28.867 15:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:28.867 15:36:58 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:28.867 15:36:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:28.867 15:36:58 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:28.867 15:36:58 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:28.867 15:36:58 -- accel/accel.sh@12 -- # build_accel_config 00:18:28.868 15:36:58 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:28.868 15:36:58 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:28.868 15:36:58 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:28.868 15:36:58 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:28.868 15:36:58 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:28.868 15:36:58 -- accel/accel.sh@40 -- # local IFS=, 00:18:28.868 15:36:58 -- accel/accel.sh@41 -- # jq -r . 00:18:28.868 [2024-04-26 15:36:58.944086] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:28.868 [2024-04-26 15:36:58.944167] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63386 ] 00:18:28.868 [2024-04-26 15:36:59.079607] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.126 [2024-04-26 15:36:59.189919] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.126 [2024-04-26 15:36:59.248181] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:29.126 [2024-04-26 15:36:59.327768] accel_perf.c:1394:main: *ERROR*: ERROR starting application 00:18:29.384 00:18:29.384 Compression does not support the verify option, aborting. 00:18:29.384 15:36:59 -- common/autotest_common.sh@641 -- # es=161 00:18:29.384 15:36:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:29.384 15:36:59 -- common/autotest_common.sh@650 -- # es=33 00:18:29.384 ************************************ 00:18:29.384 END TEST accel_compress_verify 00:18:29.384 ************************************ 00:18:29.384 15:36:59 -- common/autotest_common.sh@651 -- # case "$es" in 00:18:29.384 15:36:59 -- common/autotest_common.sh@658 -- # es=1 00:18:29.384 15:36:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:29.384 00:18:29.384 real 0m0.522s 00:18:29.384 user 0m0.337s 00:18:29.384 sys 0m0.123s 00:18:29.384 15:36:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:29.384 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.384 15:36:59 -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:18:29.384 15:36:59 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:29.384 15:36:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.384 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.384 ************************************ 00:18:29.384 START TEST accel_wrong_workload 00:18:29.384 ************************************ 00:18:29.384 15:36:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w foobar 00:18:29.384 15:36:59 -- common/autotest_common.sh@638 -- # local es=0 00:18:29.384 15:36:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:18:29.384 15:36:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:29.384 15:36:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.384 15:36:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:29.384 15:36:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.384 15:36:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w foobar 00:18:29.384 15:36:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:18:29.384 15:36:59 -- accel/accel.sh@12 -- # build_accel_config 00:18:29.384 15:36:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:29.384 15:36:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:29.384 15:36:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:29.384 15:36:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:29.384 15:36:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:29.384 15:36:59 -- accel/accel.sh@40 -- # local IFS=, 00:18:29.384 15:36:59 -- accel/accel.sh@41 -- # jq -r . 
00:18:29.384 Unsupported workload type: foobar 00:18:29.384 [2024-04-26 15:36:59.576439] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:18:29.384 accel_perf options: 00:18:29.384 [-h help message] 00:18:29.384 [-q queue depth per core] 00:18:29.384 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:18:29.384 [-T number of threads per core 00:18:29.384 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:18:29.384 [-t time in seconds] 00:18:29.384 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:18:29.384 [ dif_verify, , dif_generate, dif_generate_copy 00:18:29.384 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:18:29.384 [-l for compress/decompress workloads, name of uncompressed input file 00:18:29.384 [-S for crc32c workload, use this seed value (default 0) 00:18:29.384 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:18:29.384 [-f for fill workload, use this BYTE value (default 255) 00:18:29.384 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:18:29.384 [-y verify result if this switch is on] 00:18:29.384 [-a tasks to allocate per core (default: same value as -q)] 00:18:29.384 Can be used to spread operations across a wider range of memory. 00:18:29.384 15:36:59 -- common/autotest_common.sh@641 -- # es=1 00:18:29.384 15:36:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:29.384 15:36:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:29.384 15:36:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:29.384 00:18:29.384 real 0m0.034s 00:18:29.384 user 0m0.019s 00:18:29.384 sys 0m0.012s 00:18:29.384 15:36:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:29.384 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.384 ************************************ 00:18:29.384 END TEST accel_wrong_workload 00:18:29.384 ************************************ 00:18:29.384 15:36:59 -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:18:29.384 15:36:59 -- common/autotest_common.sh@1087 -- # '[' 10 -le 1 ']' 00:18:29.384 15:36:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.385 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.642 ************************************ 00:18:29.643 START TEST accel_negative_buffers 00:18:29.643 ************************************ 00:18:29.643 15:36:59 -- common/autotest_common.sh@1111 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:18:29.643 15:36:59 -- common/autotest_common.sh@638 -- # local es=0 00:18:29.643 15:36:59 -- common/autotest_common.sh@640 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:18:29.643 15:36:59 -- common/autotest_common.sh@626 -- # local arg=accel_perf 00:18:29.643 15:36:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.643 15:36:59 -- common/autotest_common.sh@630 -- # type -t accel_perf 00:18:29.643 15:36:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:18:29.643 15:36:59 -- common/autotest_common.sh@641 -- # accel_perf -t 1 -w xor -y -x -1 00:18:29.643 15:36:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:18:29.643 15:36:59 -- accel/accel.sh@12 -- # 
build_accel_config 00:18:29.643 15:36:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:29.643 15:36:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:29.643 15:36:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:29.643 15:36:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:29.643 15:36:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:29.643 15:36:59 -- accel/accel.sh@40 -- # local IFS=, 00:18:29.643 15:36:59 -- accel/accel.sh@41 -- # jq -r . 00:18:29.643 -x option must be non-negative. 00:18:29.643 [2024-04-26 15:36:59.729379] app.c:1364:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:18:29.643 accel_perf options: 00:18:29.643 [-h help message] 00:18:29.643 [-q queue depth per core] 00:18:29.643 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:18:29.643 [-T number of threads per core 00:18:29.643 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:18:29.643 [-t time in seconds] 00:18:29.643 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:18:29.643 [ dif_verify, , dif_generate, dif_generate_copy 00:18:29.643 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:18:29.643 [-l for compress/decompress workloads, name of uncompressed input file 00:18:29.643 [-S for crc32c workload, use this seed value (default 0) 00:18:29.643 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:18:29.643 [-f for fill workload, use this BYTE value (default 255) 00:18:29.643 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:18:29.643 [-y verify result if this switch is on] 00:18:29.643 [-a tasks to allocate per core (default: same value as -q)] 00:18:29.643 Can be used to spread operations across a wider range of memory. 
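Both rejected invocations above (-w foobar and -x -1) trip spdk_app_parse_args and print the same usage text. Going by that usage text alone, an accepted variant of the xor case would pass a non-negative source-buffer count of at least two, for example (a sketch based on the printed options, not an invocation exercised by this run):

/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w xor -y -x 2
# 1-second software xor run with 2 source buffers, verifying the result (-y)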
00:18:29.643 15:36:59 -- common/autotest_common.sh@641 -- # es=1 00:18:29.643 15:36:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:18:29.643 15:36:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:18:29.643 15:36:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:18:29.643 00:18:29.643 real 0m0.033s 00:18:29.643 user 0m0.022s 00:18:29.643 sys 0m0.010s 00:18:29.643 ************************************ 00:18:29.643 END TEST accel_negative_buffers 00:18:29.643 ************************************ 00:18:29.643 15:36:59 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:29.643 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.643 15:36:59 -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:18:29.643 15:36:59 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:29.643 15:36:59 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:29.643 15:36:59 -- common/autotest_common.sh@10 -- # set +x 00:18:29.643 ************************************ 00:18:29.643 START TEST accel_crc32c 00:18:29.643 ************************************ 00:18:29.643 15:36:59 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -S 32 -y 00:18:29.643 15:36:59 -- accel/accel.sh@16 -- # local accel_opc 00:18:29.643 15:36:59 -- accel/accel.sh@17 -- # local accel_module 00:18:29.643 15:36:59 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:18:29.643 15:36:59 -- accel/accel.sh@19 -- # IFS=: 00:18:29.643 15:36:59 -- accel/accel.sh@19 -- # read -r var val 00:18:29.643 15:36:59 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:18:29.643 15:36:59 -- accel/accel.sh@12 -- # build_accel_config 00:18:29.643 15:36:59 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:29.643 15:36:59 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:29.643 15:36:59 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:29.643 15:36:59 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:29.643 15:36:59 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:29.643 15:36:59 -- accel/accel.sh@40 -- # local IFS=, 00:18:29.643 15:36:59 -- accel/accel.sh@41 -- # jq -r . 00:18:29.643 [2024-04-26 15:36:59.872520] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:29.643 [2024-04-26 15:36:59.872605] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63464 ] 00:18:29.900 [2024-04-26 15:37:00.009206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.900 [2024-04-26 15:37:00.136686] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.158 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.158 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.158 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.158 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.158 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=0x1 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=crc32c 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=32 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=software 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@22 -- # accel_module=software 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=32 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=32 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=1 00:18:30.159 15:37:00 
-- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val=Yes 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:30.159 15:37:00 -- accel/accel.sh@20 -- # val= 00:18:30.159 15:37:00 -- accel/accel.sh@21 -- # case "$var" in 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # IFS=: 00:18:30.159 15:37:00 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.532 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.532 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.532 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.532 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.532 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.532 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.532 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.532 15:37:01 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:31.532 15:37:01 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:18:31.532 15:37:01 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:31.532 00:18:31.532 real 0m1.549s 00:18:31.532 user 0m1.338s 00:18:31.532 sys 0m0.115s 00:18:31.532 15:37:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:31.532 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:18:31.532 ************************************ 00:18:31.532 END TEST accel_crc32c 00:18:31.532 ************************************ 00:18:31.533 15:37:01 -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:18:31.533 15:37:01 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:31.533 15:37:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:31.533 15:37:01 -- common/autotest_common.sh@10 -- # set +x 00:18:31.533 ************************************ 00:18:31.533 START TEST accel_crc32c_C2 00:18:31.533 
************************************ 00:18:31.533 15:37:01 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w crc32c -y -C 2 00:18:31.533 15:37:01 -- accel/accel.sh@16 -- # local accel_opc 00:18:31.533 15:37:01 -- accel/accel.sh@17 -- # local accel_module 00:18:31.533 15:37:01 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:18:31.533 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.533 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.533 15:37:01 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:18:31.533 15:37:01 -- accel/accel.sh@12 -- # build_accel_config 00:18:31.533 15:37:01 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:31.533 15:37:01 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:31.533 15:37:01 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:31.533 15:37:01 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:31.533 15:37:01 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:31.533 15:37:01 -- accel/accel.sh@40 -- # local IFS=, 00:18:31.533 15:37:01 -- accel/accel.sh@41 -- # jq -r . 00:18:31.533 [2024-04-26 15:37:01.540024] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:31.533 [2024-04-26 15:37:01.540117] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63497 ] 00:18:31.533 [2024-04-26 15:37:01.679178] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.533 [2024-04-26 15:37:01.811048] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=0x1 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=crc32c 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=0 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" 
in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=software 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@22 -- # accel_module=software 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=32 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=32 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=1 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val=Yes 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:31.792 15:37:01 -- accel/accel.sh@20 -- # val= 00:18:31.792 15:37:01 -- accel/accel.sh@21 -- # case "$var" in 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # IFS=: 00:18:31.792 15:37:01 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.166 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.166 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.166 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.166 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.166 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@20 -- # val= 
00:18:33.166 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:33.166 15:37:03 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:18:33.166 15:37:03 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:33.166 00:18:33.166 real 0m1.547s 00:18:33.166 user 0m1.326s 00:18:33.166 sys 0m0.127s 00:18:33.166 15:37:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:33.166 ************************************ 00:18:33.166 END TEST accel_crc32c_C2 00:18:33.166 ************************************ 00:18:33.166 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:18:33.166 15:37:03 -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:18:33.166 15:37:03 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:33.166 15:37:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:33.166 15:37:03 -- common/autotest_common.sh@10 -- # set +x 00:18:33.166 ************************************ 00:18:33.166 START TEST accel_copy 00:18:33.166 ************************************ 00:18:33.166 15:37:03 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy -y 00:18:33.166 15:37:03 -- accel/accel.sh@16 -- # local accel_opc 00:18:33.166 15:37:03 -- accel/accel.sh@17 -- # local accel_module 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.166 15:37:03 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:18:33.166 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.166 15:37:03 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:18:33.166 15:37:03 -- accel/accel.sh@12 -- # build_accel_config 00:18:33.166 15:37:03 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:33.166 15:37:03 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:33.166 15:37:03 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:33.166 15:37:03 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:33.166 15:37:03 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:33.166 15:37:03 -- accel/accel.sh@40 -- # local IFS=, 00:18:33.166 15:37:03 -- accel/accel.sh@41 -- # jq -r . 00:18:33.166 [2024-04-26 15:37:03.206345] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:33.166 [2024-04-26 15:37:03.206424] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63541 ] 00:18:33.166 [2024-04-26 15:37:03.346340] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:33.423 [2024-04-26 15:37:03.467828] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val=0x1 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val=copy 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@23 -- # accel_opc=copy 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val=software 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@22 -- # accel_module=software 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val=32 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val=32 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val=1 00:18:33.423 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.423 15:37:03 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:33.423 
15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.423 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.424 15:37:03 -- accel/accel.sh@20 -- # val=Yes 00:18:33.424 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.424 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.424 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.424 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.424 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.424 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.424 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:33.424 15:37:03 -- accel/accel.sh@20 -- # val= 00:18:33.424 15:37:03 -- accel/accel.sh@21 -- # case "$var" in 00:18:33.424 15:37:03 -- accel/accel.sh@19 -- # IFS=: 00:18:33.424 15:37:03 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@20 -- # val= 00:18:34.797 15:37:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@20 -- # val= 00:18:34.797 15:37:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@20 -- # val= 00:18:34.797 15:37:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@20 -- # val= 00:18:34.797 15:37:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@20 -- # val= 00:18:34.797 15:37:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@20 -- # val= 00:18:34.797 15:37:04 -- accel/accel.sh@21 -- # case "$var" in 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:34.797 15:37:04 -- accel/accel.sh@27 -- # [[ -n copy ]] 00:18:34.797 15:37:04 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:34.797 00:18:34.797 real 0m1.549s 00:18:34.797 user 0m1.325s 00:18:34.797 sys 0m0.129s 00:18:34.797 15:37:04 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:34.797 ************************************ 00:18:34.797 END TEST accel_copy 00:18:34.797 ************************************ 00:18:34.797 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.797 15:37:04 -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:34.797 15:37:04 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:18:34.797 15:37:04 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:34.797 15:37:04 -- common/autotest_common.sh@10 -- # set +x 00:18:34.797 ************************************ 00:18:34.797 START TEST accel_fill 00:18:34.797 ************************************ 00:18:34.797 15:37:04 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:34.797 15:37:04 -- accel/accel.sh@16 -- # local accel_opc 00:18:34.797 15:37:04 -- accel/accel.sh@17 -- # local 
accel_module 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # IFS=: 00:18:34.797 15:37:04 -- accel/accel.sh@19 -- # read -r var val 00:18:34.797 15:37:04 -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:34.797 15:37:04 -- accel/accel.sh@12 -- # build_accel_config 00:18:34.797 15:37:04 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:18:34.797 15:37:04 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:34.797 15:37:04 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:34.797 15:37:04 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:34.797 15:37:04 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:34.797 15:37:04 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:34.797 15:37:04 -- accel/accel.sh@40 -- # local IFS=, 00:18:34.797 15:37:04 -- accel/accel.sh@41 -- # jq -r . 00:18:34.797 [2024-04-26 15:37:04.859495] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:34.797 [2024-04-26 15:37:04.859584] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63586 ] 00:18:34.797 [2024-04-26 15:37:05.000915] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.056 [2024-04-26 15:37:05.129580] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=0x1 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=fill 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@23 -- # accel_opc=fill 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=0x80 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case 
"$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=software 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@22 -- # accel_module=software 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=64 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=64 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=1 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val=Yes 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:35.056 15:37:05 -- accel/accel.sh@20 -- # val= 00:18:35.056 15:37:05 -- accel/accel.sh@21 -- # case "$var" in 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # IFS=: 00:18:35.056 15:37:05 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.430 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.430 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.430 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.430 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.430 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.430 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@27 -- # [[ -n 
software ]] 00:18:36.430 15:37:06 -- accel/accel.sh@27 -- # [[ -n fill ]] 00:18:36.430 15:37:06 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:36.430 ************************************ 00:18:36.430 END TEST accel_fill 00:18:36.430 ************************************ 00:18:36.430 00:18:36.430 real 0m1.556s 00:18:36.430 user 0m1.344s 00:18:36.430 sys 0m0.116s 00:18:36.430 15:37:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:36.430 15:37:06 -- common/autotest_common.sh@10 -- # set +x 00:18:36.430 15:37:06 -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:18:36.430 15:37:06 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:36.430 15:37:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:36.430 15:37:06 -- common/autotest_common.sh@10 -- # set +x 00:18:36.430 ************************************ 00:18:36.430 START TEST accel_copy_crc32c 00:18:36.430 ************************************ 00:18:36.430 15:37:06 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y 00:18:36.430 15:37:06 -- accel/accel.sh@16 -- # local accel_opc 00:18:36.430 15:37:06 -- accel/accel.sh@17 -- # local accel_module 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.430 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.430 15:37:06 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:18:36.430 15:37:06 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:18:36.430 15:37:06 -- accel/accel.sh@12 -- # build_accel_config 00:18:36.430 15:37:06 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:36.430 15:37:06 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:36.430 15:37:06 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:36.431 15:37:06 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:36.431 15:37:06 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:36.431 15:37:06 -- accel/accel.sh@40 -- # local IFS=, 00:18:36.431 15:37:06 -- accel/accel.sh@41 -- # jq -r . 00:18:36.431 [2024-04-26 15:37:06.538103] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
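The accel_fill run that finished just above was driven by the command shown at the top of its trace: /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y. A minimal sketch for repeating that run by hand, outside the accel.sh harness, follows; dropping the -c /dev/fd/62 JSON config (and assuming accel_perf then falls back to its defaults) is an assumption, and the flag glosses are read off the val= lines in the trace rather than from the tool's help text.
# Sketch only: re-run the fill workload the way this log invokes it.
# -f 128 appears in the trace as val=0x80, -t 1 as '1 seconds',
# and the two 64s as the val=64 entries.
SPDK_EXAMPLES=/home/vagrant/spdk_repo/spdk/build/examples   # path taken from the trace
"$SPDK_EXAMPLES/accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y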
00:18:36.431 [2024-04-26 15:37:06.538211] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63619 ] 00:18:36.431 [2024-04-26 15:37:06.675120] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.689 [2024-04-26 15:37:06.799036] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=0x1 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=copy_crc32c 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=0 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=software 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@22 -- # accel_module=software 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=32 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=32 
00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=1 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val=Yes 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:36.689 15:37:06 -- accel/accel.sh@20 -- # val= 00:18:36.689 15:37:06 -- accel/accel.sh@21 -- # case "$var" in 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # IFS=: 00:18:36.689 15:37:06 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.064 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.064 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.064 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.064 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.064 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 ************************************ 00:18:38.064 END TEST accel_copy_crc32c 00:18:38.064 ************************************ 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.064 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:38.064 15:37:08 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:18:38.064 15:37:08 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:38.064 00:18:38.064 real 0m1.550s 00:18:38.064 user 0m1.341s 00:18:38.064 sys 0m0.112s 00:18:38.064 15:37:08 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:38.064 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:18:38.064 15:37:08 -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:18:38.064 15:37:08 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 
']' 00:18:38.064 15:37:08 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:38.064 15:37:08 -- common/autotest_common.sh@10 -- # set +x 00:18:38.064 ************************************ 00:18:38.064 START TEST accel_copy_crc32c_C2 00:18:38.064 ************************************ 00:18:38.064 15:37:08 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:18:38.064 15:37:08 -- accel/accel.sh@16 -- # local accel_opc 00:18:38.064 15:37:08 -- accel/accel.sh@17 -- # local accel_module 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.064 15:37:08 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:18:38.064 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.064 15:37:08 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:18:38.064 15:37:08 -- accel/accel.sh@12 -- # build_accel_config 00:18:38.064 15:37:08 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:38.064 15:37:08 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:38.064 15:37:08 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:38.064 15:37:08 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:38.064 15:37:08 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:38.064 15:37:08 -- accel/accel.sh@40 -- # local IFS=, 00:18:38.064 15:37:08 -- accel/accel.sh@41 -- # jq -r . 00:18:38.064 [2024-04-26 15:37:08.210745] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:38.064 [2024-04-26 15:37:08.210834] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63663 ] 00:18:38.064 [2024-04-26 15:37:08.348451] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.322 [2024-04-26 15:37:08.466637] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=0x1 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=copy_crc32c 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=0 00:18:38.322 15:37:08 -- 
accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val='8192 bytes' 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=software 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@22 -- # accel_module=software 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=32 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=32 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=1 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val=Yes 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:38.322 15:37:08 -- accel/accel.sh@20 -- # val= 00:18:38.322 15:37:08 -- accel/accel.sh@21 -- # case "$var" in 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # IFS=: 00:18:38.322 15:37:08 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@20 -- # val= 00:18:39.698 15:37:09 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@20 -- # val= 00:18:39.698 15:37:09 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@20 -- # val= 00:18:39.698 15:37:09 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 
00:18:39.698 15:37:09 -- accel/accel.sh@20 -- # val= 00:18:39.698 ************************************ 00:18:39.698 END TEST accel_copy_crc32c_C2 00:18:39.698 ************************************ 00:18:39.698 15:37:09 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@20 -- # val= 00:18:39.698 15:37:09 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@20 -- # val= 00:18:39.698 15:37:09 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:39.698 15:37:09 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:18:39.698 15:37:09 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:39.698 00:18:39.698 real 0m1.530s 00:18:39.698 user 0m1.320s 00:18:39.698 sys 0m0.116s 00:18:39.698 15:37:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:39.698 15:37:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 15:37:09 -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:18:39.698 15:37:09 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:39.698 15:37:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:39.698 15:37:09 -- common/autotest_common.sh@10 -- # set +x 00:18:39.698 ************************************ 00:18:39.698 START TEST accel_dualcast 00:18:39.698 ************************************ 00:18:39.698 15:37:09 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dualcast -y 00:18:39.698 15:37:09 -- accel/accel.sh@16 -- # local accel_opc 00:18:39.698 15:37:09 -- accel/accel.sh@17 -- # local accel_module 00:18:39.698 15:37:09 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # IFS=: 00:18:39.698 15:37:09 -- accel/accel.sh@19 -- # read -r var val 00:18:39.698 15:37:09 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:18:39.698 15:37:09 -- accel/accel.sh@12 -- # build_accel_config 00:18:39.698 15:37:09 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:39.698 15:37:09 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:39.698 15:37:09 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:39.698 15:37:09 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:39.698 15:37:09 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:39.698 15:37:09 -- accel/accel.sh@40 -- # local IFS=, 00:18:39.698 15:37:09 -- accel/accel.sh@41 -- # jq -r . 00:18:39.698 [2024-04-26 15:37:09.847479] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
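The two copy_crc32c runs above differ only in the -C 2 flag: the plain run (accel_test -t 1 -w copy_crc32c -y) is configured with two 4096-byte values in its trace, while the -C 2 run carries a 4096-byte and an 8192-byte value. Taken literally from those val= lines, the second run touches twice as much destination data per operation; what -C controls internally is not stated in this log, so treat that reading as an assumption. A sketch of the two invocations side by side, again without the harness's -c /dev/fd/62 config:
# Sketch: the two copy_crc32c variants as they appear in this log.
SPDK_EXAMPLES=/home/vagrant/spdk_repo/spdk/build/examples    # path taken from the trace
"$SPDK_EXAMPLES/accel_perf" -t 1 -w copy_crc32c -y           # trace shows two 4096-byte buffers
"$SPDK_EXAMPLES/accel_perf" -t 1 -w copy_crc32c -y -C 2      # trace shows 4096- and 8192-byte buffers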
00:18:39.698 [2024-04-26 15:37:09.847704] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63701 ] 00:18:39.698 [2024-04-26 15:37:09.981365] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.956 [2024-04-26 15:37:10.121344] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=0x1 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=dualcast 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@23 -- # accel_opc=dualcast 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=software 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@22 -- # accel_module=software 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=32 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=32 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=1 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val='1 seconds' 
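Every accel_perf invocation in this log, including the dualcast run configured here, passes -c /dev/fd/62: build_accel_config assembles accel_json_cfg=(), a jq -r . step runs as part of that build (its exact role is not visible in this excerpt), and the resulting JSON reaches the tool through a file-descriptor path. In these runs all of the [[ 0 -gt 0 ]] module checks are false, so the config stays empty and the trace settles on accel_module=software. Below is a rough bash sketch of the same fd-passing pattern; the JSON body is a placeholder assumption, not the output of build_accel_config, and how the harness actually wires up fd 62 is not shown in this log.
# Sketch: hand a JSON config to accel_perf over a file-descriptor path,
# mirroring the '-c /dev/fd/62' argument seen throughout this trace.
cfg='{}'                                   # placeholder config body (assumption)
echo "$cfg" | jq -r . > /dev/null          # cheap syntax check, in the spirit of the harness's jq -r .
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
  -c <(printf '%s' "$cfg") -t 1 -w dualcast -y   # <(...) yields a /dev/fd/NN path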
00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val=Yes 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:39.956 15:37:10 -- accel/accel.sh@20 -- # val= 00:18:39.956 15:37:10 -- accel/accel.sh@21 -- # case "$var" in 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # IFS=: 00:18:39.956 15:37:10 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.329 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.329 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.329 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.329 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.329 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.329 ************************************ 00:18:41.329 END TEST accel_dualcast 00:18:41.329 ************************************ 00:18:41.329 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:41.329 15:37:11 -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:18:41.329 15:37:11 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:41.329 00:18:41.329 real 0m1.538s 00:18:41.329 user 0m1.332s 00:18:41.329 sys 0m0.106s 00:18:41.329 15:37:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:41.329 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.329 15:37:11 -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:18:41.329 15:37:11 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:41.329 15:37:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:41.329 15:37:11 -- common/autotest_common.sh@10 -- # set +x 00:18:41.329 ************************************ 00:18:41.329 START TEST accel_compare 00:18:41.329 ************************************ 00:18:41.329 15:37:11 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compare -y 00:18:41.329 15:37:11 -- accel/accel.sh@16 -- # local accel_opc 00:18:41.329 15:37:11 -- accel/accel.sh@17 -- # local 
accel_module 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.329 15:37:11 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:18:41.329 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.329 15:37:11 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:18:41.329 15:37:11 -- accel/accel.sh@12 -- # build_accel_config 00:18:41.329 15:37:11 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:41.329 15:37:11 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:41.329 15:37:11 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:41.329 15:37:11 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:41.329 15:37:11 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:41.329 15:37:11 -- accel/accel.sh@40 -- # local IFS=, 00:18:41.329 15:37:11 -- accel/accel.sh@41 -- # jq -r . 00:18:41.329 [2024-04-26 15:37:11.506398] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:41.329 [2024-04-26 15:37:11.506490] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63742 ] 00:18:41.586 [2024-04-26 15:37:11.642209] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:41.586 [2024-04-26 15:37:11.759511] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=0x1 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=compare 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@23 -- # accel_opc=compare 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=software 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 
00:18:41.586 15:37:11 -- accel/accel.sh@22 -- # accel_module=software 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=32 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=32 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=1 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val=Yes 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.586 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.586 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:41.586 15:37:11 -- accel/accel.sh@20 -- # val= 00:18:41.587 15:37:11 -- accel/accel.sh@21 -- # case "$var" in 00:18:41.587 15:37:11 -- accel/accel.sh@19 -- # IFS=: 00:18:41.587 15:37:11 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:42.954 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:42.954 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:42.954 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:42.954 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:42.954 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:42.954 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 ************************************ 00:18:42.954 END TEST accel_compare 00:18:42.954 15:37:13 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:42.954 15:37:13 -- accel/accel.sh@27 -- # [[ -n compare ]] 00:18:42.954 15:37:13 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 
00:18:42.954 00:18:42.954 real 0m1.522s 00:18:42.954 user 0m0.013s 00:18:42.954 sys 0m0.004s 00:18:42.954 15:37:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:42.954 15:37:13 -- common/autotest_common.sh@10 -- # set +x 00:18:42.954 ************************************ 00:18:42.954 15:37:13 -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:18:42.954 15:37:13 -- common/autotest_common.sh@1087 -- # '[' 7 -le 1 ']' 00:18:42.954 15:37:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:42.954 15:37:13 -- common/autotest_common.sh@10 -- # set +x 00:18:42.954 ************************************ 00:18:42.954 START TEST accel_xor 00:18:42.954 ************************************ 00:18:42.954 15:37:13 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y 00:18:42.954 15:37:13 -- accel/accel.sh@16 -- # local accel_opc 00:18:42.954 15:37:13 -- accel/accel.sh@17 -- # local accel_module 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:42.954 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:42.954 15:37:13 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:18:42.954 15:37:13 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:18:42.954 15:37:13 -- accel/accel.sh@12 -- # build_accel_config 00:18:42.954 15:37:13 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:42.955 15:37:13 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:42.955 15:37:13 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:42.955 15:37:13 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:42.955 15:37:13 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:42.955 15:37:13 -- accel/accel.sh@40 -- # local IFS=, 00:18:42.955 15:37:13 -- accel/accel.sh@41 -- # jq -r . 00:18:42.955 [2024-04-26 15:37:13.143722] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
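Each TEST block in this stretch ends with bash's time summary: accel_fill 0m1.556s, accel_copy_crc32c 0m1.550s, accel_copy_crc32c_C2 0m1.530s, accel_dualcast 0m1.538s, and accel_compare 0m1.522s just above. When skimming a long autotest console log for those numbers, a one-liner along these lines pulls the END TEST markers and real times out of a saved copy; it is a sketch written against the format shown here, and the pairing relies on the two patterns appearing near each other.
# Sketch: list completed accel tests alongside their 'real' timings.
# 'build.log' is a stand-in name for wherever the console output was saved.
grep -Eo 'END TEST [A-Za-z0-9_]+|real[[:space:]]+[0-9]+m[0-9.]+s' build.log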
00:18:42.955 [2024-04-26 15:37:13.143868] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63785 ] 00:18:43.267 [2024-04-26 15:37:13.290321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.267 [2024-04-26 15:37:13.407658] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=0x1 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=xor 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@23 -- # accel_opc=xor 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=2 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=software 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@22 -- # accel_module=software 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=32 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=32 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=1 00:18:43.267 15:37:13 -- 
accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val=Yes 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.267 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.267 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:43.267 15:37:13 -- accel/accel.sh@20 -- # val= 00:18:43.268 15:37:13 -- accel/accel.sh@21 -- # case "$var" in 00:18:43.268 15:37:13 -- accel/accel.sh@19 -- # IFS=: 00:18:43.268 15:37:13 -- accel/accel.sh@19 -- # read -r var val 00:18:44.641 15:37:14 -- accel/accel.sh@20 -- # val= 00:18:44.641 15:37:14 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.641 15:37:14 -- accel/accel.sh@20 -- # val= 00:18:44.641 15:37:14 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.641 15:37:14 -- accel/accel.sh@20 -- # val= 00:18:44.641 15:37:14 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.641 15:37:14 -- accel/accel.sh@20 -- # val= 00:18:44.641 15:37:14 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.641 15:37:14 -- accel/accel.sh@20 -- # val= 00:18:44.641 15:37:14 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.641 15:37:14 -- accel/accel.sh@20 -- # val= 00:18:44.641 15:37:14 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.641 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.642 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.642 15:37:14 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:44.642 15:37:14 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:18:44.642 15:37:14 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:44.642 00:18:44.642 real 0m1.538s 00:18:44.642 user 0m1.323s 00:18:44.642 sys 0m0.122s 00:18:44.642 15:37:14 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:44.642 ************************************ 00:18:44.642 END TEST accel_xor 00:18:44.642 ************************************ 00:18:44.642 15:37:14 -- common/autotest_common.sh@10 -- # set +x 00:18:44.642 15:37:14 -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:18:44.642 15:37:14 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:44.642 15:37:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:44.642 15:37:14 -- common/autotest_common.sh@10 -- # set +x 00:18:44.642 ************************************ 00:18:44.642 START TEST accel_xor 00:18:44.642 ************************************ 00:18:44.642 
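The xor test that just completed ran accel_perf -t 1 -w xor -y with two source buffers (val=2 in its config), and the accel_xor run starting below adds -x 3, which the next stretch of trace configures with val=3; reading -x as the source-buffer count is an inference from those lines, not something the log spells out. The two invocations, without the harness config:
# Sketch: xor across the default two sources vs. three sources via -x 3,
# matching the back-to-back runs in this part of the log.
SPDK_EXAMPLES=/home/vagrant/spdk_repo/spdk/build/examples    # path taken from the trace
"$SPDK_EXAMPLES/accel_perf" -t 1 -w xor -y           # trace shows val=2 sources
"$SPDK_EXAMPLES/accel_perf" -t 1 -w xor -y -x 3      # trace shows val=3 sources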
15:37:14 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w xor -y -x 3 00:18:44.642 15:37:14 -- accel/accel.sh@16 -- # local accel_opc 00:18:44.642 15:37:14 -- accel/accel.sh@17 -- # local accel_module 00:18:44.642 15:37:14 -- accel/accel.sh@19 -- # IFS=: 00:18:44.642 15:37:14 -- accel/accel.sh@19 -- # read -r var val 00:18:44.642 15:37:14 -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:18:44.642 15:37:14 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:18:44.642 15:37:14 -- accel/accel.sh@12 -- # build_accel_config 00:18:44.642 15:37:14 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:44.642 15:37:14 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:44.642 15:37:14 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:44.642 15:37:14 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:44.642 15:37:14 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:44.642 15:37:14 -- accel/accel.sh@40 -- # local IFS=, 00:18:44.642 15:37:14 -- accel/accel.sh@41 -- # jq -r . 00:18:44.642 [2024-04-26 15:37:14.791993] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:44.642 [2024-04-26 15:37:14.792096] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63821 ] 00:18:44.642 [2024-04-26 15:37:14.929540] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:44.901 [2024-04-26 15:37:15.060422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=0x1 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=xor 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@23 -- # accel_opc=xor 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=3 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 
00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=software 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@22 -- # accel_module=software 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=32 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=32 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=1 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val=Yes 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:44.901 15:37:15 -- accel/accel.sh@20 -- # val= 00:18:44.901 15:37:15 -- accel/accel.sh@21 -- # case "$var" in 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # IFS=: 00:18:44.901 15:37:15 -- accel/accel.sh@19 -- # read -r var val 00:18:46.276 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.276 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.276 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.276 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.276 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.276 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.276 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.276 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.276 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.276 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.276 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.276 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.276 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 
00:18:46.277 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.277 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.277 15:37:16 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:46.277 15:37:16 -- accel/accel.sh@27 -- # [[ -n xor ]] 00:18:46.277 15:37:16 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:46.277 00:18:46.277 real 0m1.549s 00:18:46.277 user 0m1.335s 00:18:46.277 sys 0m0.118s 00:18:46.277 15:37:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:46.277 ************************************ 00:18:46.277 END TEST accel_xor 00:18:46.277 ************************************ 00:18:46.277 15:37:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.277 15:37:16 -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:18:46.277 15:37:16 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:46.277 15:37:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:46.277 15:37:16 -- common/autotest_common.sh@10 -- # set +x 00:18:46.277 ************************************ 00:18:46.277 START TEST accel_dif_verify 00:18:46.277 ************************************ 00:18:46.277 15:37:16 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_verify 00:18:46.277 15:37:16 -- accel/accel.sh@16 -- # local accel_opc 00:18:46.277 15:37:16 -- accel/accel.sh@17 -- # local accel_module 00:18:46.277 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.277 15:37:16 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:18:46.277 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.277 15:37:16 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:18:46.277 15:37:16 -- accel/accel.sh@12 -- # build_accel_config 00:18:46.277 15:37:16 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:46.277 15:37:16 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:46.277 15:37:16 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:46.277 15:37:16 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:46.277 15:37:16 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:46.277 15:37:16 -- accel/accel.sh@40 -- # local IFS=, 00:18:46.277 15:37:16 -- accel/accel.sh@41 -- # jq -r . 00:18:46.277 [2024-04-26 15:37:16.453062] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
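The accel_dif_verify run starting here is the only workload in this stretch that configures metadata: alongside the two 4096-byte buffers, the trace that follows carries a '512 bytes' and an '8 bytes' value, which reads like a 512-byte block size with 8 bytes of protection information per block; that is an inference from the val= lines, not something the log states. The invocation itself, minus the harness's -c /dev/fd/62 config, is simply:
# Sketch: the dif_verify workload as traced (note there is no -y flag on this one).
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_verify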
00:18:46.277 [2024-04-26 15:37:16.453203] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63865 ] 00:18:46.535 [2024-04-26 15:37:16.589350] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:46.535 [2024-04-26 15:37:16.708370] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val=0x1 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val=dif_verify 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val='512 bytes' 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val='8 bytes' 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val=software 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@22 -- # accel_module=software 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 
-- # val=32 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val=32 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val=1 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val=No 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:46.535 15:37:16 -- accel/accel.sh@20 -- # val= 00:18:46.535 15:37:16 -- accel/accel.sh@21 -- # case "$var" in 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # IFS=: 00:18:46.535 15:37:16 -- accel/accel.sh@19 -- # read -r var val 00:18:47.908 15:37:17 -- accel/accel.sh@20 -- # val= 00:18:47.908 15:37:17 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # IFS=: 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # read -r var val 00:18:47.908 15:37:17 -- accel/accel.sh@20 -- # val= 00:18:47.908 15:37:17 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # IFS=: 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # read -r var val 00:18:47.908 15:37:17 -- accel/accel.sh@20 -- # val= 00:18:47.908 15:37:17 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # IFS=: 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # read -r var val 00:18:47.908 15:37:17 -- accel/accel.sh@20 -- # val= 00:18:47.908 15:37:17 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # IFS=: 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # read -r var val 00:18:47.908 15:37:17 -- accel/accel.sh@20 -- # val= 00:18:47.908 15:37:17 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # IFS=: 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # read -r var val 00:18:47.908 15:37:17 -- accel/accel.sh@20 -- # val= 00:18:47.908 15:37:17 -- accel/accel.sh@21 -- # case "$var" in 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # IFS=: 00:18:47.908 15:37:17 -- accel/accel.sh@19 -- # read -r var val 00:18:47.909 ************************************ 00:18:47.909 END TEST accel_dif_verify 00:18:47.909 ************************************ 00:18:47.909 15:37:17 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:47.909 15:37:17 -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:18:47.909 15:37:17 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:47.909 00:18:47.909 real 0m1.527s 00:18:47.909 user 0m1.324s 00:18:47.909 sys 0m0.111s 00:18:47.909 15:37:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:47.909 
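accel_dif_verify passes on the software module in roughly 1.5 s of wall time for its 1-second workload. The remaining cases follow the same pattern: run_test wraps accel_test, which launches the accel_perf example binary with the workload selected by -w and a 1-second duration set by -t. As a rough standalone reproduction of the next case, assuming the default software module needs no explicit JSON config (the -c /dev/fd/62 descriptor in the trace is generated by accel.sh's build_accel_config), one could presumably run:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dif_generate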
15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.909 15:37:17 -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:18:47.909 15:37:17 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:47.909 15:37:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:47.909 15:37:17 -- common/autotest_common.sh@10 -- # set +x 00:18:47.909 ************************************ 00:18:47.909 START TEST accel_dif_generate 00:18:47.909 ************************************ 00:18:47.909 15:37:18 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate 00:18:47.909 15:37:18 -- accel/accel.sh@16 -- # local accel_opc 00:18:47.909 15:37:18 -- accel/accel.sh@17 -- # local accel_module 00:18:47.909 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:47.909 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:47.909 15:37:18 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:18:47.909 15:37:18 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:18:47.909 15:37:18 -- accel/accel.sh@12 -- # build_accel_config 00:18:47.909 15:37:18 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:47.909 15:37:18 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:47.909 15:37:18 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:47.909 15:37:18 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:47.909 15:37:18 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:47.909 15:37:18 -- accel/accel.sh@40 -- # local IFS=, 00:18:47.909 15:37:18 -- accel/accel.sh@41 -- # jq -r . 00:18:47.909 [2024-04-26 15:37:18.092526] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:47.909 [2024-04-26 15:37:18.092624] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63904 ] 00:18:48.167 [2024-04-26 15:37:18.221453] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:48.167 [2024-04-26 15:37:18.336895] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=0x1 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=dif_generate 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@23 -- # 
accel_opc=dif_generate 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val='512 bytes' 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val='8 bytes' 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=software 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@22 -- # accel_module=software 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=32 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=32 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=1 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val=No 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:48.167 15:37:18 -- accel/accel.sh@20 -- # val= 00:18:48.167 15:37:18 -- accel/accel.sh@21 -- # case "$var" in 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # IFS=: 00:18:48.167 15:37:18 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@20 -- # val= 00:18:49.540 15:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var 
val 00:18:49.540 15:37:19 -- accel/accel.sh@20 -- # val= 00:18:49.540 15:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@20 -- # val= 00:18:49.540 15:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@20 -- # val= 00:18:49.540 15:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@20 -- # val= 00:18:49.540 15:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@20 -- # val= 00:18:49.540 15:37:19 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:49.540 15:37:19 -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:18:49.540 15:37:19 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:49.540 00:18:49.540 real 0m1.524s 00:18:49.540 user 0m1.315s 00:18:49.540 sys 0m0.114s 00:18:49.540 15:37:19 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:49.540 15:37:19 -- common/autotest_common.sh@10 -- # set +x 00:18:49.540 ************************************ 00:18:49.540 END TEST accel_dif_generate 00:18:49.540 ************************************ 00:18:49.540 15:37:19 -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:18:49.540 15:37:19 -- common/autotest_common.sh@1087 -- # '[' 6 -le 1 ']' 00:18:49.540 15:37:19 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:49.540 15:37:19 -- common/autotest_common.sh@10 -- # set +x 00:18:49.540 ************************************ 00:18:49.540 START TEST accel_dif_generate_copy 00:18:49.540 ************************************ 00:18:49.540 15:37:19 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w dif_generate_copy 00:18:49.540 15:37:19 -- accel/accel.sh@16 -- # local accel_opc 00:18:49.540 15:37:19 -- accel/accel.sh@17 -- # local accel_module 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # IFS=: 00:18:49.540 15:37:19 -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:18:49.540 15:37:19 -- accel/accel.sh@19 -- # read -r var val 00:18:49.540 15:37:19 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:18:49.540 15:37:19 -- accel/accel.sh@12 -- # build_accel_config 00:18:49.540 15:37:19 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:49.540 15:37:19 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:49.540 15:37:19 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:49.540 15:37:19 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:49.540 15:37:19 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:49.540 15:37:19 -- accel/accel.sh@40 -- # local IFS=, 00:18:49.540 15:37:19 -- accel/accel.sh@41 -- # jq -r . 00:18:49.540 [2024-04-26 15:37:19.728336] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:49.540 [2024-04-26 15:37:19.728447] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63942 ] 00:18:49.799 [2024-04-26 15:37:19.865616] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.799 [2024-04-26 15:37:19.981628] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val=0x1 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val=dif_generate_copy 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val=software 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@22 -- # accel_module=software 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val=32 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val=32 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 
-- # val=1 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val=No 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:49.799 15:37:20 -- accel/accel.sh@20 -- # val= 00:18:49.799 15:37:20 -- accel/accel.sh@21 -- # case "$var" in 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # IFS=: 00:18:49.799 15:37:20 -- accel/accel.sh@19 -- # read -r var val 00:18:51.248 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.249 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.249 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.249 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.249 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.249 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 ************************************ 00:18:51.249 END TEST accel_dif_generate_copy 00:18:51.249 ************************************ 00:18:51.249 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.249 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 15:37:21 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:51.249 15:37:21 -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:18:51.249 15:37:21 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:51.249 00:18:51.249 real 0m1.530s 00:18:51.249 user 0m1.324s 00:18:51.249 sys 0m0.112s 00:18:51.249 15:37:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:51.249 15:37:21 -- common/autotest_common.sh@10 -- # set +x 00:18:51.249 15:37:21 -- accel/accel.sh@115 -- # [[ y == y ]] 00:18:51.249 15:37:21 -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:51.249 15:37:21 -- common/autotest_common.sh@1087 -- # '[' 8 -le 1 ']' 00:18:51.249 15:37:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:51.249 15:37:21 -- 
common/autotest_common.sh@10 -- # set +x 00:18:51.249 ************************************ 00:18:51.249 START TEST accel_comp 00:18:51.249 ************************************ 00:18:51.249 15:37:21 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:51.249 15:37:21 -- accel/accel.sh@16 -- # local accel_opc 00:18:51.249 15:37:21 -- accel/accel.sh@17 -- # local accel_module 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.249 15:37:21 -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:51.249 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.249 15:37:21 -- accel/accel.sh@12 -- # build_accel_config 00:18:51.249 15:37:21 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:51.249 15:37:21 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:51.249 15:37:21 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:51.249 15:37:21 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:51.249 15:37:21 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:51.249 15:37:21 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:51.249 15:37:21 -- accel/accel.sh@40 -- # local IFS=, 00:18:51.249 15:37:21 -- accel/accel.sh@41 -- # jq -r . 00:18:51.249 [2024-04-26 15:37:21.366708] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:51.249 [2024-04-26 15:37:21.366822] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63981 ] 00:18:51.249 [2024-04-26 15:37:21.505645] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.506 [2024-04-26 15:37:21.623060] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val=0x1 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val=compress 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@23 
-- # accel_opc=compress 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:51.506 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.506 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.506 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val=software 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@22 -- # accel_module=software 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val=32 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val=32 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val=1 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val=No 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:51.507 15:37:21 -- accel/accel.sh@20 -- # val= 00:18:51.507 15:37:21 -- accel/accel.sh@21 -- # case "$var" in 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # IFS=: 00:18:51.507 15:37:21 -- accel/accel.sh@19 -- # read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@20 -- # val= 00:18:52.880 15:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@20 -- # val= 00:18:52.880 15:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@20 -- # val= 00:18:52.880 15:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # 
read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@20 -- # val= 00:18:52.880 15:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@20 -- # val= 00:18:52.880 15:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@20 -- # val= 00:18:52.880 15:37:22 -- accel/accel.sh@21 -- # case "$var" in 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.880 15:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:52.880 15:37:22 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:52.880 15:37:22 -- accel/accel.sh@27 -- # [[ -n compress ]] 00:18:52.880 15:37:22 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:52.880 00:18:52.880 real 0m1.538s 00:18:52.880 user 0m1.319s 00:18:52.880 sys 0m0.122s 00:18:52.880 15:37:22 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:52.880 15:37:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.880 ************************************ 00:18:52.880 END TEST accel_comp 00:18:52.880 ************************************ 00:18:52.880 15:37:22 -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:52.880 15:37:22 -- common/autotest_common.sh@1087 -- # '[' 9 -le 1 ']' 00:18:52.880 15:37:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:52.880 15:37:22 -- common/autotest_common.sh@10 -- # set +x 00:18:52.880 ************************************ 00:18:52.880 START TEST accel_decomp 00:18:52.880 ************************************ 00:18:52.880 15:37:22 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:52.881 15:37:22 -- accel/accel.sh@16 -- # local accel_opc 00:18:52.881 15:37:22 -- accel/accel.sh@17 -- # local accel_module 00:18:52.881 15:37:22 -- accel/accel.sh@19 -- # IFS=: 00:18:52.881 15:37:22 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:52.881 15:37:22 -- accel/accel.sh@19 -- # read -r var val 00:18:52.881 15:37:22 -- accel/accel.sh@12 -- # build_accel_config 00:18:52.881 15:37:22 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:18:52.881 15:37:22 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:52.881 15:37:22 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:52.881 15:37:22 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:52.881 15:37:22 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:52.881 15:37:22 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:52.881 15:37:22 -- accel/accel.sh@40 -- # local IFS=, 00:18:52.881 15:37:22 -- accel/accel.sh@41 -- # jq -r . 00:18:52.881 [2024-04-26 15:37:23.015232] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
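accel_comp compresses the bundled bib test file, and the decompress cases that follow feed the same file to accel_perf through -l while adding -y, which is assumed here to verify the decompressed output against the original data (the flag appears in the traced command line, but its exact semantics are not spelled out in this log). The plain decompress case now starting was launched as:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y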
00:18:52.881 [2024-04-26 15:37:23.015332] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64020 ] 00:18:52.881 [2024-04-26 15:37:23.153328] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:53.139 [2024-04-26 15:37:23.269455] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=0x1 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=decompress 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=software 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@22 -- # accel_module=software 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=32 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- 
accel/accel.sh@20 -- # val=32 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=1 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val=Yes 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:53.139 15:37:23 -- accel/accel.sh@20 -- # val= 00:18:53.139 15:37:23 -- accel/accel.sh@21 -- # case "$var" in 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # IFS=: 00:18:53.139 15:37:23 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.515 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.515 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.515 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.515 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.515 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.515 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.515 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.515 15:37:24 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:54.515 ************************************ 00:18:54.515 END TEST accel_decomp 00:18:54.515 ************************************ 00:18:54.515 15:37:24 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:54.515 15:37:24 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:54.515 00:18:54.515 real 0m1.529s 00:18:54.515 user 0m1.324s 00:18:54.515 sys 0m0.113s 00:18:54.515 15:37:24 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:54.515 15:37:24 -- common/autotest_common.sh@10 -- # set +x 00:18:54.515 15:37:24 -- accel/accel.sh@118 -- # run_test accel_decmop_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 
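accel_decomp finishes in about 1.5 s like the earlier single-core cases. The _full variant starting here reuses the decompress command but adds -o 0; judging from the traced buffer size changing from '4096 bytes' to '111250 bytes' below, this appears to make accel_perf process the whole bib file as one buffer instead of 4 KiB blocks, an inference from the trace rather than a documented flag description. The corresponding invocation, copied from the trace, is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0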
00:18:54.515 15:37:24 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:18:54.515 15:37:24 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:54.515 15:37:24 -- common/autotest_common.sh@10 -- # set +x 00:18:54.515 ************************************ 00:18:54.515 START TEST accel_decmop_full 00:18:54.515 ************************************ 00:18:54.515 15:37:24 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:54.515 15:37:24 -- accel/accel.sh@16 -- # local accel_opc 00:18:54.515 15:37:24 -- accel/accel.sh@17 -- # local accel_module 00:18:54.516 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.516 15:37:24 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:54.516 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.516 15:37:24 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:18:54.516 15:37:24 -- accel/accel.sh@12 -- # build_accel_config 00:18:54.516 15:37:24 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:54.516 15:37:24 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:54.516 15:37:24 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:54.516 15:37:24 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:54.516 15:37:24 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:54.516 15:37:24 -- accel/accel.sh@40 -- # local IFS=, 00:18:54.516 15:37:24 -- accel/accel.sh@41 -- # jq -r . 00:18:54.516 [2024-04-26 15:37:24.664431] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:18:54.516 [2024-04-26 15:37:24.664507] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64059 ] 00:18:54.516 [2024-04-26 15:37:24.801183] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.775 [2024-04-26 15:37:24.930741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val=0x1 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 
15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val=decompress 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val='111250 bytes' 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:24 -- accel/accel.sh@20 -- # val=software 00:18:54.775 15:37:24 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:24 -- accel/accel.sh@22 -- # accel_module=software 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:24 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val=32 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val=32 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val=1 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val=Yes 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:54.775 15:37:25 -- accel/accel.sh@20 -- # val= 00:18:54.775 15:37:25 -- accel/accel.sh@21 -- # case "$var" in 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # IFS=: 00:18:54.775 15:37:25 -- accel/accel.sh@19 -- # read -r var val 00:18:56.149 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.149 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.149 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.149 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # read -r 
var val 00:18:56.149 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.149 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.149 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.149 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.149 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.149 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.149 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.149 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.149 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.149 15:37:26 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:56.149 15:37:26 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:56.149 15:37:26 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:56.149 00:18:56.149 real 0m1.561s 00:18:56.149 user 0m1.340s 00:18:56.149 sys 0m0.126s 00:18:56.149 15:37:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:56.149 15:37:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.149 ************************************ 00:18:56.149 END TEST accel_decmop_full 00:18:56.149 ************************************ 00:18:56.149 15:37:26 -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.149 15:37:26 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:18:56.149 15:37:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:56.150 15:37:26 -- common/autotest_common.sh@10 -- # set +x 00:18:56.150 ************************************ 00:18:56.150 START TEST accel_decomp_mcore 00:18:56.150 ************************************ 00:18:56.150 15:37:26 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.150 15:37:26 -- accel/accel.sh@16 -- # local accel_opc 00:18:56.150 15:37:26 -- accel/accel.sh@17 -- # local accel_module 00:18:56.150 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.150 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.150 15:37:26 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.150 15:37:26 -- accel/accel.sh@12 -- # build_accel_config 00:18:56.150 15:37:26 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:18:56.150 15:37:26 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:56.150 15:37:26 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:56.150 15:37:26 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:56.150 15:37:26 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:56.150 15:37:26 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:56.150 15:37:26 -- accel/accel.sh@40 -- # local IFS=, 00:18:56.150 15:37:26 -- accel/accel.sh@41 -- # jq -r . 00:18:56.150 [2024-04-26 15:37:26.339253] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:56.150 [2024-04-26 15:37:26.339356] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64103 ] 00:18:56.408 [2024-04-26 15:37:26.477241] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:56.408 [2024-04-26 15:37:26.599258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:56.408 [2024-04-26 15:37:26.599398] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:56.408 [2024-04-26 15:37:26.600221] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:56.408 [2024-04-26 15:37:26.600236] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:56.408 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.408 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.408 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.408 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.408 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.408 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.408 15:37:26 -- accel/accel.sh@20 -- # val=0xf 00:18:56.408 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.408 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.408 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.408 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=decompress 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=software 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@22 -- # accel_module=software 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 
00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=32 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=32 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=1 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val=Yes 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:56.409 15:37:26 -- accel/accel.sh@20 -- # val= 00:18:56.409 15:37:26 -- accel/accel.sh@21 -- # case "$var" in 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # IFS=: 00:18:56.409 15:37:26 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- 
accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@20 -- # val= 00:18:57.784 15:37:27 -- accel/accel.sh@21 -- # case "$var" in 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:27 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:27 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:57.784 15:37:27 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:57.784 15:37:27 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:57.784 00:18:57.784 real 0m1.583s 00:18:57.784 user 0m4.832s 00:18:57.784 sys 0m0.130s 00:18:57.784 15:37:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:57.784 15:37:27 -- common/autotest_common.sh@10 -- # set +x 00:18:57.784 ************************************ 00:18:57.784 END TEST accel_decomp_mcore 00:18:57.784 ************************************ 00:18:57.784 15:37:27 -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:57.784 15:37:27 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:18:57.784 15:37:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:57.784 15:37:27 -- common/autotest_common.sh@10 -- # set +x 00:18:57.784 ************************************ 00:18:57.784 START TEST accel_decomp_full_mcore 00:18:57.784 ************************************ 00:18:57.784 15:37:28 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:57.784 15:37:28 -- accel/accel.sh@16 -- # local accel_opc 00:18:57.784 15:37:28 -- accel/accel.sh@17 -- # local accel_module 00:18:57.784 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:57.784 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:57.784 15:37:28 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:57.784 15:37:28 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:18:57.784 15:37:28 -- accel/accel.sh@12 -- # build_accel_config 00:18:57.784 15:37:28 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:57.784 15:37:28 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:57.784 15:37:28 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:57.784 15:37:28 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:57.784 15:37:28 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:57.784 15:37:28 -- accel/accel.sh@40 -- # local IFS=, 00:18:57.784 15:37:28 -- accel/accel.sh@41 -- # jq -r . 00:18:57.784 [2024-04-26 15:37:28.032423] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
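accel_decomp_mcore runs the same decompress workload with -m 0xf, so the app reports four available cores and starts a reactor on each; the timing summary (about 1.6 s real against roughly 4.8 s of user time) is consistent with four workers running in parallel. The case now starting combines the full-buffer and multi-core options, launched as:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf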
00:18:57.784 [2024-04-26 15:37:28.032517] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64144 ] 00:18:58.081 [2024-04-26 15:37:28.168163] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:58.081 [2024-04-26 15:37:28.298638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:18:58.081 [2024-04-26 15:37:28.298762] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:18:58.081 [2024-04-26 15:37:28.299835] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:18:58.081 [2024-04-26 15:37:28.299865] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=0xf 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=decompress 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val='111250 bytes' 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=software 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@22 -- # accel_module=software 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 
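The wall of case "$var" / IFS=: / read -r var val records around this point is the accel.sh@19-23 loop parsing a colon-separated key/value stream and keeping the two fields the assertions below care about; paraphrased, the loop is simply:

  # paraphrased sketch; the real branch patterns live in accel.sh
  while IFS=: read -r var val; do
      case "$var" in
          *opc*)    accel_opc=$val ;;     # becomes "decompress" in this run
          *module*) accel_module=$val ;;  # becomes "software" in this run
      esac
  done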
00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=32 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=32 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=1 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val=Yes 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:58.346 15:37:28 -- accel/accel.sh@20 -- # val= 00:18:58.346 15:37:28 -- accel/accel.sh@21 -- # case "$var" in 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # IFS=: 00:18:58.346 15:37:28 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- 
accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@20 -- # val= 00:18:59.719 15:37:29 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@27 -- # [[ -n software ]] 00:18:59.719 15:37:29 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:18:59.719 15:37:29 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:18:59.719 ************************************ 00:18:59.719 END TEST accel_decomp_full_mcore 00:18:59.719 ************************************ 00:18:59.719 00:18:59.719 real 0m1.583s 00:18:59.719 user 0m4.828s 00:18:59.719 sys 0m0.138s 00:18:59.719 15:37:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:18:59.719 15:37:29 -- common/autotest_common.sh@10 -- # set +x 00:18:59.719 15:37:29 -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:59.719 15:37:29 -- common/autotest_common.sh@1087 -- # '[' 11 -le 1 ']' 00:18:59.719 15:37:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:18:59.719 15:37:29 -- common/autotest_common.sh@10 -- # set +x 00:18:59.719 ************************************ 00:18:59.719 START TEST accel_decomp_mthread 00:18:59.719 ************************************ 00:18:59.719 15:37:29 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:59.719 15:37:29 -- accel/accel.sh@16 -- # local accel_opc 00:18:59.719 15:37:29 -- accel/accel.sh@17 -- # local accel_module 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # IFS=: 00:18:59.719 15:37:29 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:59.719 15:37:29 -- accel/accel.sh@19 -- # read -r var val 00:18:59.719 15:37:29 -- accel/accel.sh@12 -- # build_accel_config 00:18:59.719 15:37:29 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:18:59.719 15:37:29 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:18:59.719 15:37:29 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:18:59.719 15:37:29 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:18:59.719 15:37:29 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:18:59.719 15:37:29 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:18:59.719 15:37:29 -- accel/accel.sh@40 -- # local IFS=, 00:18:59.719 15:37:29 -- accel/accel.sh@41 -- # jq -r . 00:18:59.719 [2024-04-26 15:37:29.726834] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:18:59.719 [2024-04-26 15:37:29.726939] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64186 ] 00:18:59.719 [2024-04-26 15:37:29.864519] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.719 [2024-04-26 15:37:30.002825] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=0x1 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=decompress 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@23 -- # accel_opc=decompress 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val='4096 bytes' 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=software 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@22 -- # accel_module=software 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=32 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- 
accel/accel.sh@20 -- # val=32 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=2 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val='1 seconds' 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val=Yes 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:18:59.977 15:37:30 -- accel/accel.sh@20 -- # val= 00:18:59.977 15:37:30 -- accel/accel.sh@21 -- # case "$var" in 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # IFS=: 00:18:59.977 15:37:30 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.350 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:01.350 15:37:31 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:01.350 15:37:31 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:01.350 00:19:01.350 real 0m1.593s 00:19:01.350 user 0m1.370s 00:19:01.350 sys 0m0.127s 00:19:01.350 15:37:31 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:01.350 15:37:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.350 ************************************ 00:19:01.350 END 
TEST accel_decomp_mthread 00:19:01.350 ************************************ 00:19:01.350 15:37:31 -- accel/accel.sh@122 -- # run_test accel_deomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:01.350 15:37:31 -- common/autotest_common.sh@1087 -- # '[' 13 -le 1 ']' 00:19:01.350 15:37:31 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:01.350 15:37:31 -- common/autotest_common.sh@10 -- # set +x 00:19:01.350 ************************************ 00:19:01.350 START TEST accel_deomp_full_mthread 00:19:01.350 ************************************ 00:19:01.350 15:37:31 -- common/autotest_common.sh@1111 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:01.350 15:37:31 -- accel/accel.sh@16 -- # local accel_opc 00:19:01.350 15:37:31 -- accel/accel.sh@17 -- # local accel_module 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.350 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.350 15:37:31 -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:01.350 15:37:31 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:19:01.350 15:37:31 -- accel/accel.sh@12 -- # build_accel_config 00:19:01.350 15:37:31 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:01.350 15:37:31 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:01.350 15:37:31 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:01.350 15:37:31 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:01.350 15:37:31 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:01.350 15:37:31 -- accel/accel.sh@40 -- # local IFS=, 00:19:01.350 15:37:31 -- accel/accel.sh@41 -- # jq -r . 00:19:01.350 [2024-04-26 15:37:31.446388] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
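The 'full' variant starting here keeps the single-core, -T 2 setup of the previous suite and adds -o 0, and the traced buffer val changes accordingly: '111250 bytes' (the whole bib fixture) instead of the '4096 bytes' used above. The traced invocation, minus the harness-supplied -c /dev/fd/62, is:

  /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
      -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2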
00:19:01.350 [2024-04-26 15:37:31.446474] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64231 ] 00:19:01.350 [2024-04-26 15:37:31.586738] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.608 [2024-04-26 15:37:31.697855] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=0x1 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=decompress 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@23 -- # accel_opc=decompress 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val='111250 bytes' 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=software 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@22 -- # accel_module=software 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=32 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- 
accel/accel.sh@20 -- # val=32 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=2 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val='1 seconds' 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val=Yes 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:01.608 15:37:31 -- accel/accel.sh@20 -- # val= 00:19:01.608 15:37:31 -- accel/accel.sh@21 -- # case "$var" in 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # IFS=: 00:19:01.608 15:37:31 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@20 -- # val= 00:19:02.982 15:37:32 -- accel/accel.sh@21 -- # case "$var" in 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # IFS=: 00:19:02.982 15:37:32 -- accel/accel.sh@19 -- # read -r var val 00:19:02.982 15:37:32 -- accel/accel.sh@27 -- # [[ -n software ]] 00:19:02.982 15:37:32 -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:19:02.982 15:37:32 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:19:02.982 ************************************ 00:19:02.982 END TEST accel_deomp_full_mthread 00:19:02.982 ************************************ 00:19:02.982 00:19:02.982 real 0m1.569s 00:19:02.982 user 0m1.360s 00:19:02.982 sys 0m0.114s 00:19:02.982 15:37:32 -- common/autotest_common.sh@1112 -- # 
xtrace_disable 00:19:02.982 15:37:32 -- common/autotest_common.sh@10 -- # set +x 00:19:02.982 15:37:33 -- accel/accel.sh@124 -- # [[ n == y ]] 00:19:02.982 15:37:33 -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:19:02.982 15:37:33 -- accel/accel.sh@137 -- # build_accel_config 00:19:02.982 15:37:33 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:02.982 15:37:33 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:19:02.982 15:37:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:02.982 15:37:33 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:19:02.982 15:37:33 -- common/autotest_common.sh@10 -- # set +x 00:19:02.982 15:37:33 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:19:02.982 15:37:33 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:19:02.982 15:37:33 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:19:02.982 15:37:33 -- accel/accel.sh@40 -- # local IFS=, 00:19:02.982 15:37:33 -- accel/accel.sh@41 -- # jq -r . 00:19:02.982 ************************************ 00:19:02.982 START TEST accel_dif_functional_tests 00:19:02.982 ************************************ 00:19:02.982 15:37:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /dev/fd/62 00:19:02.982 [2024-04-26 15:37:33.150622] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:02.982 [2024-04-26 15:37:33.150735] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64270 ] 00:19:03.240 [2024-04-26 15:37:33.290653] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:03.240 [2024-04-26 15:37:33.431132] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:03.240 [2024-04-26 15:37:33.431286] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:03.240 [2024-04-26 15:37:33.431297] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:03.240 00:19:03.240 00:19:03.240 CUnit - A unit testing framework for C - Version 2.1-3 00:19:03.240 http://cunit.sourceforge.net/ 00:19:03.240 00:19:03.240 00:19:03.240 Suite: accel_dif 00:19:03.240 Test: verify: DIF generated, GUARD check ...passed 00:19:03.240 Test: verify: DIF generated, APPTAG check ...passed 00:19:03.240 Test: verify: DIF generated, REFTAG check ...passed 00:19:03.240 Test: verify: DIF not generated, GUARD check ...passed 00:19:03.240 Test: verify: DIF not generated, APPTAG check ...[2024-04-26 15:37:33.530028] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:19:03.240 [2024-04-26 15:37:33.530220] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:19:03.240 [2024-04-26 15:37:33.530263] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:19:03.240 [2024-04-26 15:37:33.530294] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:19:03.240 passed 00:19:03.240 Test: verify: DIF not generated, REFTAG check ...passed 00:19:03.240 Test: verify: APPTAG correct, APPTAG check ...passed 00:19:03.240 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:19:03.240 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:19:03.240 Test: verify: REFTAG incorrect, REFTAG ignore ...[2024-04-26 15:37:33.530322] dif.c: 
776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:19:03.240 [2024-04-26 15:37:33.530350] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:19:03.240 [2024-04-26 15:37:33.530415] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:19:03.240 passed 00:19:03.240 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:19:03.240 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-04-26 15:37:33.530840] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:19:03.240 passed 00:19:03.240 Test: generate copy: DIF generated, GUARD check ...passed 00:19:03.240 Test: generate copy: DIF generated, APTTAG check ...passed 00:19:03.240 Test: generate copy: DIF generated, REFTAG check ...passed 00:19:03.240 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:19:03.240 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:19:03.240 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:19:03.240 Test: generate copy: iovecs-len validate ...passed 00:19:03.240 Test: generate copy: buffer alignment validate ...[2024-04-26 15:37:33.531457] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:19:03.240 passed 00:19:03.240 00:19:03.240 Run Summary: Type Total Ran Passed Failed Inactive 00:19:03.240 suites 1 1 n/a 0 0 00:19:03.240 tests 20 20 20 0 0 00:19:03.240 asserts 204 204 204 0 n/a 00:19:03.240 00:19:03.240 Elapsed time = 0.005 seconds 00:19:03.497 00:19:03.497 real 0m0.678s 00:19:03.497 user 0m0.849s 00:19:03.497 sys 0m0.158s 00:19:03.497 15:37:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:03.497 ************************************ 00:19:03.497 END TEST accel_dif_functional_tests 00:19:03.497 15:37:33 -- common/autotest_common.sh@10 -- # set +x 00:19:03.497 ************************************ 00:19:03.771 00:19:03.771 real 0m37.439s 00:19:03.771 user 0m38.261s 00:19:03.771 sys 0m4.696s 00:19:03.771 ************************************ 00:19:03.771 END TEST accel 00:19:03.771 ************************************ 00:19:03.771 15:37:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:03.771 15:37:33 -- common/autotest_common.sh@10 -- # set +x 00:19:03.771 15:37:33 -- spdk/autotest.sh@180 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:19:03.771 15:37:33 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:03.771 15:37:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:03.771 15:37:33 -- common/autotest_common.sh@10 -- # set +x 00:19:03.771 ************************************ 00:19:03.771 START TEST accel_rpc 00:19:03.771 ************************************ 00:19:03.771 15:37:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:19:03.771 * Looking for test storage... 00:19:03.771 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:19:03.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
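accel_rpc.sh, which has just started above, never runs accel_perf at all; it drives a bare spdk_tgt started with --wait-for-rpc (hence the 'Waiting for process to start up...' line) purely over JSON-RPC. The startup half, condensed from the trace that follows:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc &
  spdk_tgt_pid=$!   # 64341 in this run
  # waitforlisten then polls /var/tmp/spdk.sock until the target answers RPCs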
00:19:03.771 15:37:34 -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:19:03.771 15:37:34 -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=64341 00:19:03.771 15:37:34 -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:19:03.771 15:37:34 -- accel/accel_rpc.sh@15 -- # waitforlisten 64341 00:19:03.771 15:37:34 -- common/autotest_common.sh@817 -- # '[' -z 64341 ']' 00:19:03.771 15:37:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:03.771 15:37:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:03.772 15:37:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:03.772 15:37:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:03.772 15:37:34 -- common/autotest_common.sh@10 -- # set +x 00:19:04.029 [2024-04-26 15:37:34.080427] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:04.029 [2024-04-26 15:37:34.080736] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64341 ] 00:19:04.029 [2024-04-26 15:37:34.217415] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:04.287 [2024-04-26 15:37:34.353109] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:04.852 15:37:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:04.852 15:37:35 -- common/autotest_common.sh@850 -- # return 0 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:19:04.852 15:37:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:04.852 15:37:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:04.852 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:04.852 ************************************ 00:19:04.852 START TEST accel_assign_opcode 00:19:04.852 ************************************ 00:19:04.852 15:37:35 -- common/autotest_common.sh@1111 -- # accel_assign_opcode_test_suite 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:19:04.852 15:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.852 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:04.852 [2024-04-26 15:37:35.109979] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:19:04.852 15:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:19:04.852 15:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:04.852 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:04.852 [2024-04-26 15:37:35.121961] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:19:04.852 15:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:04.852 15:37:35 -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:19:04.852 15:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 
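With the target up, the accel_assign_opcode sub-test pins the copy opcode first to a bogus module and then to software, finishes init, and checks that the assignment took. rpc_cmd in the trace is a thin wrapper over scripts/rpc.py, so the same sequence by hand would be:

  scripts/rpc.py accel_assign_opc -o copy -m incorrect    # accepted pre-init, only a notice
  scripts/rpc.py accel_assign_opc -o copy -m software
  scripts/rpc.py framework_start_init
  scripts/rpc.py accel_get_opc_assignments | jq -r .copy  # the test greps this for "software"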
00:19:04.852 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.110 15:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.110 15:37:35 -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:19:05.110 15:37:35 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:05.110 15:37:35 -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:19:05.110 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.110 15:37:35 -- accel/accel_rpc.sh@42 -- # grep software 00:19:05.110 15:37:35 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:05.368 software 00:19:05.368 00:19:05.368 real 0m0.320s 00:19:05.368 user 0m0.060s 00:19:05.368 sys 0m0.011s 00:19:05.368 ************************************ 00:19:05.368 END TEST accel_assign_opcode 00:19:05.368 ************************************ 00:19:05.368 15:37:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:05.368 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.368 15:37:35 -- accel/accel_rpc.sh@55 -- # killprocess 64341 00:19:05.368 15:37:35 -- common/autotest_common.sh@936 -- # '[' -z 64341 ']' 00:19:05.368 15:37:35 -- common/autotest_common.sh@940 -- # kill -0 64341 00:19:05.368 15:37:35 -- common/autotest_common.sh@941 -- # uname 00:19:05.368 15:37:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:05.368 15:37:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64341 00:19:05.368 killing process with pid 64341 00:19:05.368 15:37:35 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:05.368 15:37:35 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:05.368 15:37:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64341' 00:19:05.368 15:37:35 -- common/autotest_common.sh@955 -- # kill 64341 00:19:05.368 15:37:35 -- common/autotest_common.sh@960 -- # wait 64341 00:19:05.626 00:19:05.626 real 0m1.980s 00:19:05.626 user 0m2.073s 00:19:05.626 sys 0m0.477s 00:19:05.626 15:37:35 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:05.626 ************************************ 00:19:05.626 END TEST accel_rpc 00:19:05.626 ************************************ 00:19:05.626 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.884 15:37:35 -- spdk/autotest.sh@181 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:19:05.884 15:37:35 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:05.884 15:37:35 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:05.884 15:37:35 -- common/autotest_common.sh@10 -- # set +x 00:19:05.884 ************************************ 00:19:05.884 START TEST app_cmdline 00:19:05.884 ************************************ 00:19:05.884 15:37:36 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:19:05.884 * Looking for test storage... 
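app_cmdline, starting here, exercises the RPC allowlist: the target is launched with --rpcs-allowed spdk_get_version,rpc_get_methods, the two allowed methods are checked, and a non-allowed call must come back as "Method not found". Reduced to the commands involved:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
  scripts/rpc.py spdk_get_version          # returns the commit/major/minor JSON traced below
  scripts/rpc.py env_dpdk_get_mem_stats    # expected to fail with -32601 Method not found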
00:19:05.884 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:19:05.884 15:37:36 -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:19:05.884 15:37:36 -- app/cmdline.sh@17 -- # spdk_tgt_pid=64461 00:19:05.884 15:37:36 -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:19:05.884 15:37:36 -- app/cmdline.sh@18 -- # waitforlisten 64461 00:19:05.884 15:37:36 -- common/autotest_common.sh@817 -- # '[' -z 64461 ']' 00:19:05.884 15:37:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:05.884 15:37:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:05.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:05.884 15:37:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:05.884 15:37:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:05.884 15:37:36 -- common/autotest_common.sh@10 -- # set +x 00:19:05.884 [2024-04-26 15:37:36.158222] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:05.884 [2024-04-26 15:37:36.158366] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64461 ] 00:19:06.142 [2024-04-26 15:37:36.292951] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.142 [2024-04-26 15:37:36.418591] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:07.075 15:37:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:07.075 15:37:37 -- common/autotest_common.sh@850 -- # return 0 00:19:07.075 15:37:37 -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:19:07.075 { 00:19:07.075 "fields": { 00:19:07.075 "commit": "2971e8ff3", 00:19:07.075 "major": 24, 00:19:07.075 "minor": 5, 00:19:07.075 "patch": 0, 00:19:07.075 "suffix": "-pre" 00:19:07.075 }, 00:19:07.075 "version": "SPDK v24.05-pre git sha1 2971e8ff3" 00:19:07.075 } 00:19:07.333 15:37:37 -- app/cmdline.sh@22 -- # expected_methods=() 00:19:07.333 15:37:37 -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:19:07.333 15:37:37 -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:19:07.333 15:37:37 -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:19:07.333 15:37:37 -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:19:07.333 15:37:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:07.333 15:37:37 -- common/autotest_common.sh@10 -- # set +x 00:19:07.333 15:37:37 -- app/cmdline.sh@26 -- # jq -r '.[]' 00:19:07.333 15:37:37 -- app/cmdline.sh@26 -- # sort 00:19:07.333 15:37:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:07.333 15:37:37 -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:19:07.333 15:37:37 -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:19:07.333 15:37:37 -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:07.333 15:37:37 -- common/autotest_common.sh@638 -- # local es=0 00:19:07.333 15:37:37 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:07.333 15:37:37 -- 
common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.333 15:37:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:07.333 15:37:37 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.333 15:37:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:07.333 15:37:37 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.333 15:37:37 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:19:07.333 15:37:37 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:07.333 15:37:37 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:19:07.333 15:37:37 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:19:07.591 2024/04/26 15:37:37 error on JSON-RPC call, method: env_dpdk_get_mem_stats, params: map[], err: error received for env_dpdk_get_mem_stats method, err: Code=-32601 Msg=Method not found 00:19:07.591 request: 00:19:07.591 { 00:19:07.591 "method": "env_dpdk_get_mem_stats", 00:19:07.591 "params": {} 00:19:07.591 } 00:19:07.591 Got JSON-RPC error response 00:19:07.591 GoRPCClient: error on JSON-RPC call 00:19:07.591 15:37:37 -- common/autotest_common.sh@641 -- # es=1 00:19:07.591 15:37:37 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:19:07.591 15:37:37 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:19:07.591 15:37:37 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:19:07.591 15:37:37 -- app/cmdline.sh@1 -- # killprocess 64461 00:19:07.591 15:37:37 -- common/autotest_common.sh@936 -- # '[' -z 64461 ']' 00:19:07.591 15:37:37 -- common/autotest_common.sh@940 -- # kill -0 64461 00:19:07.591 15:37:37 -- common/autotest_common.sh@941 -- # uname 00:19:07.591 15:37:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:07.591 15:37:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64461 00:19:07.591 killing process with pid 64461 00:19:07.591 15:37:37 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:07.591 15:37:37 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:07.591 15:37:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64461' 00:19:07.591 15:37:37 -- common/autotest_common.sh@955 -- # kill 64461 00:19:07.591 15:37:37 -- common/autotest_common.sh@960 -- # wait 64461 00:19:08.156 00:19:08.156 real 0m2.148s 00:19:08.156 user 0m2.659s 00:19:08.156 sys 0m0.495s 00:19:08.156 15:37:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:08.156 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.156 ************************************ 00:19:08.156 END TEST app_cmdline 00:19:08.156 ************************************ 00:19:08.156 15:37:38 -- spdk/autotest.sh@182 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:19:08.156 15:37:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:19:08.156 15:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.156 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.156 ************************************ 00:19:08.156 START TEST version 00:19:08.156 ************************************ 00:19:08.156 15:37:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:19:08.156 * Looking for test storage... 
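version.sh, just starting, needs no target at all: it scrapes the version macros out of include/spdk/version.h and cross-checks them against the installed Python package. One field of that pipeline, as traced below:

  # major=24; minor, patch and suffix are pulled the same way and combined into 24.5rc0
  grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' \
      /home/vagrant/spdk_repo/spdk/include/spdk/version.h | cut -f2 | tr -d '"'
  python3 -c 'import spdk; print(spdk.__version__)'   # must report the same 24.5rc0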
00:19:08.156 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:19:08.156 15:37:38 -- app/version.sh@17 -- # get_header_version major 00:19:08.156 15:37:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:08.156 15:37:38 -- app/version.sh@14 -- # cut -f2 00:19:08.156 15:37:38 -- app/version.sh@14 -- # tr -d '"' 00:19:08.156 15:37:38 -- app/version.sh@17 -- # major=24 00:19:08.156 15:37:38 -- app/version.sh@18 -- # get_header_version minor 00:19:08.156 15:37:38 -- app/version.sh@14 -- # tr -d '"' 00:19:08.156 15:37:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:08.156 15:37:38 -- app/version.sh@14 -- # cut -f2 00:19:08.156 15:37:38 -- app/version.sh@18 -- # minor=5 00:19:08.156 15:37:38 -- app/version.sh@19 -- # get_header_version patch 00:19:08.156 15:37:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:08.156 15:37:38 -- app/version.sh@14 -- # cut -f2 00:19:08.156 15:37:38 -- app/version.sh@14 -- # tr -d '"' 00:19:08.156 15:37:38 -- app/version.sh@19 -- # patch=0 00:19:08.156 15:37:38 -- app/version.sh@20 -- # get_header_version suffix 00:19:08.156 15:37:38 -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:19:08.156 15:37:38 -- app/version.sh@14 -- # cut -f2 00:19:08.156 15:37:38 -- app/version.sh@14 -- # tr -d '"' 00:19:08.156 15:37:38 -- app/version.sh@20 -- # suffix=-pre 00:19:08.156 15:37:38 -- app/version.sh@22 -- # version=24.5 00:19:08.156 15:37:38 -- app/version.sh@25 -- # (( patch != 0 )) 00:19:08.156 15:37:38 -- app/version.sh@28 -- # version=24.5rc0 00:19:08.156 15:37:38 -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:08.156 15:37:38 -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:19:08.156 15:37:38 -- app/version.sh@30 -- # py_version=24.5rc0 00:19:08.156 15:37:38 -- app/version.sh@31 -- # [[ 24.5rc0 == \2\4\.\5\r\c\0 ]] 00:19:08.156 00:19:08.156 real 0m0.159s 00:19:08.156 user 0m0.086s 00:19:08.156 sys 0m0.100s 00:19:08.156 15:37:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:08.156 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.156 ************************************ 00:19:08.156 END TEST version 00:19:08.156 ************************************ 00:19:08.414 15:37:38 -- spdk/autotest.sh@184 -- # '[' 0 -eq 1 ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@194 -- # uname -s 00:19:08.414 15:37:38 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:19:08.414 15:37:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:08.414 15:37:38 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:19:08.414 15:37:38 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@254 -- # '[' 0 -eq 1 ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@258 -- # timing_exit lib 00:19:08.414 15:37:38 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:08.414 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.414 15:37:38 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@268 -- # '[' 0 -eq 1 ']' 00:19:08.414 15:37:38 -- 
spdk/autotest.sh@277 -- # '[' 1 -eq 1 ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@278 -- # export NET_TYPE 00:19:08.414 15:37:38 -- spdk/autotest.sh@281 -- # '[' tcp = rdma ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@284 -- # '[' tcp = tcp ']' 00:19:08.414 15:37:38 -- spdk/autotest.sh@285 -- # run_test nvmf_tcp /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:08.414 15:37:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:08.414 15:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.414 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.414 ************************************ 00:19:08.414 START TEST nvmf_tcp 00:19:08.414 ************************************ 00:19:08.414 15:37:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/nvmf.sh --transport=tcp 00:19:08.414 * Looking for test storage... 00:19:08.414 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf 00:19:08.414 15:37:38 -- nvmf/nvmf.sh@10 -- # uname -s 00:19:08.414 15:37:38 -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:19:08.414 15:37:38 -- nvmf/nvmf.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.414 15:37:38 -- nvmf/common.sh@7 -- # uname -s 00:19:08.414 15:37:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.414 15:37:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.414 15:37:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.414 15:37:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.414 15:37:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.414 15:37:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.414 15:37:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.414 15:37:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.414 15:37:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.414 15:37:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.414 15:37:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:08.414 15:37:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:08.414 15:37:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.414 15:37:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.414 15:37:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.414 15:37:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.414 15:37:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.414 15:37:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.414 15:37:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.414 15:37:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.415 15:37:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.415 15:37:38 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.415 15:37:38 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.415 15:37:38 -- paths/export.sh@5 -- # export PATH 00:19:08.415 15:37:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.415 15:37:38 -- nvmf/common.sh@47 -- # : 0 00:19:08.415 15:37:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.415 15:37:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.415 15:37:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.415 15:37:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.415 15:37:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.415 15:37:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.415 15:37:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.415 15:37:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.415 15:37:38 -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:19:08.415 15:37:38 -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:19:08.415 15:37:38 -- nvmf/nvmf.sh@20 -- # timing_enter target 00:19:08.415 15:37:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:08.415 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.415 15:37:38 -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:19:08.415 15:37:38 -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:08.415 15:37:38 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:08.415 15:37:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:08.415 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.673 ************************************ 00:19:08.673 START TEST nvmf_example 00:19:08.673 ************************************ 00:19:08.673 15:37:38 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:19:08.673 * Looking for test storage... 
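nvmf_example, starting here, begins like every other NVMe-oF suite by sourcing test/nvmf/common.sh, and the variables traced around this point are the shared test defaults; the interesting ones in this run are:

  NVMF_PORT=4420                     # with 4421/4422 as second and third listener ports
  NET_TYPE=virt                      # virtual networking, so have_pci_nics stays 0
  NVME_HOSTNQN=$(nvme gen-hostnqn)   # fresh uuid-based host NQN per run
  NVME_HOSTID=${NVME_HOSTNQN##*:}    # paraphrased: the uuid portion of that NQN, as seen in the trace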
00:19:08.673 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:08.673 15:37:38 -- target/nvmf_example.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:08.673 15:37:38 -- nvmf/common.sh@7 -- # uname -s 00:19:08.673 15:37:38 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:08.673 15:37:38 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:08.673 15:37:38 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:08.673 15:37:38 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:08.673 15:37:38 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:08.673 15:37:38 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:08.673 15:37:38 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:08.673 15:37:38 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:08.673 15:37:38 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:08.673 15:37:38 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:08.673 15:37:38 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:08.673 15:37:38 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:08.673 15:37:38 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:08.673 15:37:38 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:08.673 15:37:38 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:08.673 15:37:38 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:08.673 15:37:38 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:08.673 15:37:38 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:08.673 15:37:38 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:08.674 15:37:38 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:08.674 15:37:38 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.674 15:37:38 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.674 15:37:38 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.674 15:37:38 -- paths/export.sh@5 -- # export PATH 00:19:08.674 15:37:38 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:08.674 15:37:38 -- nvmf/common.sh@47 -- # : 0 00:19:08.674 15:37:38 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:08.674 15:37:38 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:08.674 15:37:38 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:08.674 15:37:38 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:08.674 15:37:38 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:08.674 15:37:38 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:08.674 15:37:38 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:08.674 15:37:38 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:08.674 15:37:38 -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:19:08.674 15:37:38 -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:19:08.674 15:37:38 -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:19:08.674 15:37:38 -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:19:08.674 15:37:38 -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:19:08.674 15:37:38 -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:19:08.674 15:37:38 -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:19:08.674 15:37:38 -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:19:08.674 15:37:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:08.674 15:37:38 -- common/autotest_common.sh@10 -- # set +x 00:19:08.674 15:37:38 -- target/nvmf_example.sh@41 -- # nvmftestinit 00:19:08.674 15:37:38 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:08.674 15:37:38 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:08.674 15:37:38 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:08.674 15:37:38 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:08.674 15:37:38 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:08.674 15:37:38 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:08.674 15:37:38 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:08.674 15:37:38 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:08.674 15:37:38 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:08.674 15:37:38 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:08.674 15:37:38 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:08.674 15:37:38 -- nvmf/common.sh@415 -- # [[ 
virt == phy-fallback ]] 00:19:08.674 15:37:38 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:08.674 15:37:38 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:08.674 15:37:38 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:08.674 15:37:38 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:08.674 15:37:38 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:08.674 15:37:38 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:08.674 15:37:38 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:08.674 15:37:38 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:08.674 15:37:38 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:08.674 15:37:38 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:08.674 15:37:38 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:08.674 15:37:38 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:08.674 15:37:38 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:08.674 15:37:38 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:08.674 15:37:38 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:08.674 Cannot find device "nvmf_init_br" 00:19:08.674 15:37:38 -- nvmf/common.sh@154 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:08.674 Cannot find device "nvmf_tgt_br" 00:19:08.674 15:37:38 -- nvmf/common.sh@155 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:08.674 Cannot find device "nvmf_tgt_br2" 00:19:08.674 15:37:38 -- nvmf/common.sh@156 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:08.674 Cannot find device "nvmf_init_br" 00:19:08.674 15:37:38 -- nvmf/common.sh@157 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:08.674 Cannot find device "nvmf_tgt_br" 00:19:08.674 15:37:38 -- nvmf/common.sh@158 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:08.674 Cannot find device "nvmf_tgt_br2" 00:19:08.674 15:37:38 -- nvmf/common.sh@159 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:08.674 Cannot find device "nvmf_br" 00:19:08.674 15:37:38 -- nvmf/common.sh@160 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:08.674 Cannot find device "nvmf_init_if" 00:19:08.674 15:37:38 -- nvmf/common.sh@161 -- # true 00:19:08.674 15:37:38 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:08.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.932 15:37:38 -- nvmf/common.sh@162 -- # true 00:19:08.932 15:37:38 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:08.932 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:08.932 15:37:38 -- nvmf/common.sh@163 -- # true 00:19:08.932 15:37:38 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:08.932 15:37:38 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:08.932 15:37:39 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:08.932 15:37:39 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:08.932 15:37:39 -- nvmf/common.sh@174 -- # ip 
link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:08.932 15:37:39 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:08.932 15:37:39 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:08.932 15:37:39 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:08.932 15:37:39 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:08.932 15:37:39 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:08.932 15:37:39 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:08.932 15:37:39 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:08.932 15:37:39 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:08.932 15:37:39 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:08.932 15:37:39 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:08.932 15:37:39 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:08.932 15:37:39 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:08.932 15:37:39 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:08.932 15:37:39 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:08.932 15:37:39 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:08.932 15:37:39 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:08.932 15:37:39 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:08.932 15:37:39 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:08.932 15:37:39 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:09.190 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:09.190 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.074 ms 00:19:09.190 00:19:09.190 --- 10.0.0.2 ping statistics --- 00:19:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.190 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:19:09.190 15:37:39 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:09.190 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:09.190 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:19:09.190 00:19:09.190 --- 10.0.0.3 ping statistics --- 00:19:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.190 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:19:09.190 15:37:39 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:09.190 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:09.190 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:19:09.190 00:19:09.190 --- 10.0.0.1 ping statistics --- 00:19:09.190 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:09.190 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:19:09.190 15:37:39 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:09.190 15:37:39 -- nvmf/common.sh@422 -- # return 0 00:19:09.190 15:37:39 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:09.190 15:37:39 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:09.190 15:37:39 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:09.190 15:37:39 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:09.190 15:37:39 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:09.190 15:37:39 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:09.190 15:37:39 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:09.190 15:37:39 -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:19:09.190 15:37:39 -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:19:09.190 15:37:39 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:09.190 15:37:39 -- common/autotest_common.sh@10 -- # set +x 00:19:09.190 15:37:39 -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:19:09.190 15:37:39 -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:19:09.190 15:37:39 -- target/nvmf_example.sh@34 -- # nvmfpid=64843 00:19:09.190 15:37:39 -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:19:09.190 15:37:39 -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:09.190 15:37:39 -- target/nvmf_example.sh@36 -- # waitforlisten 64843 00:19:09.190 15:37:39 -- common/autotest_common.sh@817 -- # '[' -z 64843 ']' 00:19:09.190 15:37:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.190 15:37:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:09.190 15:37:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
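Condensed, the bring-up traced above and the rpc_cmd calls that follow amount to the sequence below. This is a sketch assembled from the commands visible in this log (interface, address, bdev and subsystem names are the ones the test uses; rpc_cmd is the test framework's wrapper around scripts/rpc.py); it omits the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the individual 'ip link set ... up' steps, and is not a standalone recipe:

# veth/namespace topology built by nvmf_veth_init
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator side
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target side
ip link add nvmf_br type bridge
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

# example target started inside the namespace, then configured over JSON-RPC
ip netns exec nvmf_tgt_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512                                # creates Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

spdk_nvme_perf then connects from the root namespace, across the bridge, to 10.0.0.2:4420 (the -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' run whose results appear below).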
00:19:09.191 15:37:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:09.191 15:37:39 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:10.123 15:37:40 -- common/autotest_common.sh@850 -- # return 0 00:19:10.123 15:37:40 -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:19:10.123 15:37:40 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:10.123 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:10.123 15:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.123 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.123 15:37:40 -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:19:10.123 15:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.123 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.123 15:37:40 -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:19:10.123 15:37:40 -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:10.123 15:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.123 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.123 15:37:40 -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:19:10.123 15:37:40 -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:10.123 15:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.123 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.123 15:37:40 -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:10.123 15:37:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:10.123 15:37:40 -- common/autotest_common.sh@10 -- # set +x 00:19:10.123 15:37:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:10.123 15:37:40 -- target/nvmf_example.sh@59 -- # perf=/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf 00:19:10.123 15:37:40 -- target/nvmf_example.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:22.331 Initializing NVMe Controllers 00:19:22.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:22.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:19:22.331 Initialization complete. Launching workers. 
00:19:22.331 ======================================================== 00:19:22.331 Latency(us) 00:19:22.331 Device Information : IOPS MiB/s Average min max 00:19:22.331 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15510.10 60.59 4128.65 840.84 21902.28 00:19:22.331 ======================================================== 00:19:22.331 Total : 15510.10 60.59 4128.65 840.84 21902.28 00:19:22.331 00:19:22.331 15:37:50 -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:19:22.331 15:37:50 -- target/nvmf_example.sh@66 -- # nvmftestfini 00:19:22.331 15:37:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:22.331 15:37:50 -- nvmf/common.sh@117 -- # sync 00:19:22.331 15:37:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:22.331 15:37:50 -- nvmf/common.sh@120 -- # set +e 00:19:22.331 15:37:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:22.331 15:37:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:22.331 rmmod nvme_tcp 00:19:22.331 rmmod nvme_fabrics 00:19:22.331 rmmod nvme_keyring 00:19:22.331 15:37:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:22.331 15:37:50 -- nvmf/common.sh@124 -- # set -e 00:19:22.331 15:37:50 -- nvmf/common.sh@125 -- # return 0 00:19:22.331 15:37:50 -- nvmf/common.sh@478 -- # '[' -n 64843 ']' 00:19:22.331 15:37:50 -- nvmf/common.sh@479 -- # killprocess 64843 00:19:22.331 15:37:50 -- common/autotest_common.sh@936 -- # '[' -z 64843 ']' 00:19:22.331 15:37:50 -- common/autotest_common.sh@940 -- # kill -0 64843 00:19:22.331 15:37:50 -- common/autotest_common.sh@941 -- # uname 00:19:22.331 15:37:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:22.331 15:37:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 64843 00:19:22.331 15:37:50 -- common/autotest_common.sh@942 -- # process_name=nvmf 00:19:22.331 15:37:50 -- common/autotest_common.sh@946 -- # '[' nvmf = sudo ']' 00:19:22.331 killing process with pid 64843 00:19:22.331 15:37:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 64843' 00:19:22.331 15:37:50 -- common/autotest_common.sh@955 -- # kill 64843 00:19:22.331 15:37:50 -- common/autotest_common.sh@960 -- # wait 64843 00:19:22.331 nvmf threads initialize successfully 00:19:22.331 bdev subsystem init successfully 00:19:22.331 created a nvmf target service 00:19:22.331 create targets's poll groups done 00:19:22.331 all subsystems of target started 00:19:22.331 nvmf target is running 00:19:22.331 all subsystems of target stopped 00:19:22.331 destroy targets's poll groups done 00:19:22.331 destroyed the nvmf target service 00:19:22.331 bdev subsystem finish successfully 00:19:22.331 nvmf threads destroy successfully 00:19:22.332 15:37:50 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:22.332 15:37:50 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:22.332 15:37:50 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:22.332 15:37:50 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:22.332 15:37:50 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:22.332 15:37:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.332 15:37:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.332 15:37:50 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.332 15:37:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:22.332 15:37:51 -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:19:22.332 15:37:51 -- common/autotest_common.sh@716 -- # 
xtrace_disable 00:19:22.332 15:37:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.332 00:19:22.332 real 0m12.293s 00:19:22.332 user 0m44.162s 00:19:22.332 sys 0m1.982s 00:19:22.332 15:37:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:22.332 15:37:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.332 ************************************ 00:19:22.332 END TEST nvmf_example 00:19:22.332 ************************************ 00:19:22.332 15:37:51 -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:22.332 15:37:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:22.332 15:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:22.332 15:37:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.332 ************************************ 00:19:22.332 START TEST nvmf_filesystem 00:19:22.332 ************************************ 00:19:22.332 15:37:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:19:22.332 * Looking for test storage... 00:19:22.332 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.332 15:37:51 -- target/filesystem.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:19:22.332 15:37:51 -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:19:22.332 15:37:51 -- common/autotest_common.sh@34 -- # set -e 00:19:22.332 15:37:51 -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:19:22.332 15:37:51 -- common/autotest_common.sh@36 -- # shopt -s extglob 00:19:22.332 15:37:51 -- common/autotest_common.sh@38 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:19:22.332 15:37:51 -- common/autotest_common.sh@43 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:19:22.332 15:37:51 -- common/autotest_common.sh@44 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:19:22.332 15:37:51 -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:19:22.332 15:37:51 -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:19:22.332 15:37:51 -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:19:22.332 15:37:51 -- common/build_config.sh@5 -- # CONFIG_USDT=y 00:19:22.332 15:37:51 -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:19:22.332 15:37:51 -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:19:22.332 15:37:51 -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:19:22.332 15:37:51 -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:19:22.332 15:37:51 -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:19:22.332 15:37:51 -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:19:22.332 15:37:51 -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:19:22.332 15:37:51 -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:19:22.332 15:37:51 -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:19:22.332 15:37:51 -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:19:22.332 15:37:51 -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:19:22.332 15:37:51 -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:19:22.332 15:37:51 -- common/build_config.sh@19 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:22.332 15:37:51 -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:19:22.332 15:37:51 -- 
common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:19:22.332 15:37:51 -- common/build_config.sh@22 -- # CONFIG_CET=n 00:19:22.332 15:37:51 -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:19:22.332 15:37:51 -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:19:22.332 15:37:51 -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:19:22.332 15:37:51 -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:19:22.332 15:37:51 -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:19:22.332 15:37:51 -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:19:22.332 15:37:51 -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:19:22.332 15:37:51 -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:19:22.332 15:37:51 -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:19:22.332 15:37:51 -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:19:22.332 15:37:51 -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:19:22.332 15:37:51 -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:19:22.332 15:37:51 -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:19:22.332 15:37:51 -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:19:22.332 15:37:51 -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:19:22.332 15:37:51 -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:19:22.332 15:37:51 -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:19:22.332 15:37:51 -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:19:22.332 15:37:51 -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:19:22.332 15:37:51 -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:19:22.332 15:37:51 -- common/build_config.sh@46 -- # CONFIG_COVERAGE=y 00:19:22.332 15:37:51 -- common/build_config.sh@47 -- # CONFIG_RDMA=y 00:19:22.332 15:37:51 -- common/build_config.sh@48 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:19:22.332 15:37:51 -- common/build_config.sh@49 -- # CONFIG_URING_PATH= 00:19:22.332 15:37:51 -- common/build_config.sh@50 -- # CONFIG_XNVME=n 00:19:22.332 15:37:51 -- common/build_config.sh@51 -- # CONFIG_VFIO_USER=n 00:19:22.332 15:37:51 -- common/build_config.sh@52 -- # CONFIG_ARCH=native 00:19:22.332 15:37:51 -- common/build_config.sh@53 -- # CONFIG_HAVE_EVP_MAC=y 00:19:22.332 15:37:51 -- common/build_config.sh@54 -- # CONFIG_URING_ZNS=n 00:19:22.332 15:37:51 -- common/build_config.sh@55 -- # CONFIG_WERROR=y 00:19:22.332 15:37:51 -- common/build_config.sh@56 -- # CONFIG_HAVE_LIBBSD=n 00:19:22.332 15:37:51 -- common/build_config.sh@57 -- # CONFIG_UBSAN=y 00:19:22.332 15:37:51 -- common/build_config.sh@58 -- # CONFIG_IPSEC_MB_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@59 -- # CONFIG_GOLANG=y 00:19:22.332 15:37:51 -- common/build_config.sh@60 -- # CONFIG_ISAL=y 00:19:22.332 15:37:51 -- common/build_config.sh@61 -- # CONFIG_IDXD_KERNEL=n 00:19:22.332 15:37:51 -- common/build_config.sh@62 -- # CONFIG_DPDK_LIB_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@63 -- # CONFIG_RDMA_PROV=verbs 00:19:22.332 15:37:51 -- common/build_config.sh@64 -- # CONFIG_APPS=y 00:19:22.332 15:37:51 -- common/build_config.sh@65 -- # CONFIG_SHARED=y 00:19:22.332 15:37:51 -- common/build_config.sh@66 -- # CONFIG_HAVE_KEYUTILS=n 00:19:22.332 15:37:51 -- common/build_config.sh@67 -- # CONFIG_FC_PATH= 00:19:22.332 
15:37:51 -- common/build_config.sh@68 -- # CONFIG_DPDK_PKG_CONFIG=n 00:19:22.332 15:37:51 -- common/build_config.sh@69 -- # CONFIG_FC=n 00:19:22.332 15:37:51 -- common/build_config.sh@70 -- # CONFIG_AVAHI=y 00:19:22.332 15:37:51 -- common/build_config.sh@71 -- # CONFIG_FIO_PLUGIN=y 00:19:22.332 15:37:51 -- common/build_config.sh@72 -- # CONFIG_RAID5F=n 00:19:22.332 15:37:51 -- common/build_config.sh@73 -- # CONFIG_EXAMPLES=y 00:19:22.332 15:37:51 -- common/build_config.sh@74 -- # CONFIG_TESTS=y 00:19:22.332 15:37:51 -- common/build_config.sh@75 -- # CONFIG_CRYPTO_MLX5=n 00:19:22.332 15:37:51 -- common/build_config.sh@76 -- # CONFIG_MAX_LCORES= 00:19:22.332 15:37:51 -- common/build_config.sh@77 -- # CONFIG_IPSEC_MB=n 00:19:22.332 15:37:51 -- common/build_config.sh@78 -- # CONFIG_PGO_DIR= 00:19:22.332 15:37:51 -- common/build_config.sh@79 -- # CONFIG_DEBUG=y 00:19:22.332 15:37:51 -- common/build_config.sh@80 -- # CONFIG_DPDK_COMPRESSDEV=n 00:19:22.332 15:37:51 -- common/build_config.sh@81 -- # CONFIG_CROSS_PREFIX= 00:19:22.332 15:37:51 -- common/build_config.sh@82 -- # CONFIG_URING=n 00:19:22.332 15:37:51 -- common/autotest_common.sh@53 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:22.332 15:37:51 -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:19:22.332 15:37:51 -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:19:22.332 15:37:51 -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:19:22.332 15:37:51 -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:19:22.332 15:37:51 -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:19:22.332 15:37:51 -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:19:22.332 15:37:51 -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:19:22.332 15:37:51 -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:19:22.332 15:37:51 -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:19:22.332 15:37:51 -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:19:22.332 15:37:51 -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:19:22.332 15:37:51 -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:19:22.332 15:37:51 -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:19:22.332 15:37:51 -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:19:22.332 15:37:51 -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:19:22.332 #define SPDK_CONFIG_H 00:19:22.332 #define SPDK_CONFIG_APPS 1 00:19:22.332 #define SPDK_CONFIG_ARCH native 00:19:22.332 #undef SPDK_CONFIG_ASAN 00:19:22.332 #define SPDK_CONFIG_AVAHI 1 00:19:22.332 #undef SPDK_CONFIG_CET 00:19:22.332 #define SPDK_CONFIG_COVERAGE 1 00:19:22.332 #define SPDK_CONFIG_CROSS_PREFIX 00:19:22.332 #undef SPDK_CONFIG_CRYPTO 00:19:22.332 #undef SPDK_CONFIG_CRYPTO_MLX5 00:19:22.332 #undef SPDK_CONFIG_CUSTOMOCF 00:19:22.332 #undef SPDK_CONFIG_DAOS 00:19:22.332 #define SPDK_CONFIG_DAOS_DIR 00:19:22.332 #define SPDK_CONFIG_DEBUG 1 00:19:22.332 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:19:22.332 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:22.332 #define SPDK_CONFIG_DPDK_INC_DIR 00:19:22.333 #define SPDK_CONFIG_DPDK_LIB_DIR 00:19:22.333 #undef 
SPDK_CONFIG_DPDK_PKG_CONFIG 00:19:22.333 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:22.333 #define SPDK_CONFIG_EXAMPLES 1 00:19:22.333 #undef SPDK_CONFIG_FC 00:19:22.333 #define SPDK_CONFIG_FC_PATH 00:19:22.333 #define SPDK_CONFIG_FIO_PLUGIN 1 00:19:22.333 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:19:22.333 #undef SPDK_CONFIG_FUSE 00:19:22.333 #undef SPDK_CONFIG_FUZZER 00:19:22.333 #define SPDK_CONFIG_FUZZER_LIB 00:19:22.333 #define SPDK_CONFIG_GOLANG 1 00:19:22.333 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:19:22.333 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:19:22.333 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:19:22.333 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:19:22.333 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:19:22.333 #undef SPDK_CONFIG_HAVE_LIBBSD 00:19:22.333 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:19:22.333 #define SPDK_CONFIG_IDXD 1 00:19:22.333 #undef SPDK_CONFIG_IDXD_KERNEL 00:19:22.333 #undef SPDK_CONFIG_IPSEC_MB 00:19:22.333 #define SPDK_CONFIG_IPSEC_MB_DIR 00:19:22.333 #define SPDK_CONFIG_ISAL 1 00:19:22.333 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:19:22.333 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:19:22.333 #define SPDK_CONFIG_LIBDIR 00:19:22.333 #undef SPDK_CONFIG_LTO 00:19:22.333 #define SPDK_CONFIG_MAX_LCORES 00:19:22.333 #define SPDK_CONFIG_NVME_CUSE 1 00:19:22.333 #undef SPDK_CONFIG_OCF 00:19:22.333 #define SPDK_CONFIG_OCF_PATH 00:19:22.333 #define SPDK_CONFIG_OPENSSL_PATH 00:19:22.333 #undef SPDK_CONFIG_PGO_CAPTURE 00:19:22.333 #define SPDK_CONFIG_PGO_DIR 00:19:22.333 #undef SPDK_CONFIG_PGO_USE 00:19:22.333 #define SPDK_CONFIG_PREFIX /usr/local 00:19:22.333 #undef SPDK_CONFIG_RAID5F 00:19:22.333 #undef SPDK_CONFIG_RBD 00:19:22.333 #define SPDK_CONFIG_RDMA 1 00:19:22.333 #define SPDK_CONFIG_RDMA_PROV verbs 00:19:22.333 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:19:22.333 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:19:22.333 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:19:22.333 #define SPDK_CONFIG_SHARED 1 00:19:22.333 #undef SPDK_CONFIG_SMA 00:19:22.333 #define SPDK_CONFIG_TESTS 1 00:19:22.333 #undef SPDK_CONFIG_TSAN 00:19:22.333 #define SPDK_CONFIG_UBLK 1 00:19:22.333 #define SPDK_CONFIG_UBSAN 1 00:19:22.333 #undef SPDK_CONFIG_UNIT_TESTS 00:19:22.333 #undef SPDK_CONFIG_URING 00:19:22.333 #define SPDK_CONFIG_URING_PATH 00:19:22.333 #undef SPDK_CONFIG_URING_ZNS 00:19:22.333 #define SPDK_CONFIG_USDT 1 00:19:22.333 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:19:22.333 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:19:22.333 #undef SPDK_CONFIG_VFIO_USER 00:19:22.333 #define SPDK_CONFIG_VFIO_USER_DIR 00:19:22.333 #define SPDK_CONFIG_VHOST 1 00:19:22.333 #define SPDK_CONFIG_VIRTIO 1 00:19:22.333 #undef SPDK_CONFIG_VTUNE 00:19:22.333 #define SPDK_CONFIG_VTUNE_DIR 00:19:22.333 #define SPDK_CONFIG_WERROR 1 00:19:22.333 #define SPDK_CONFIG_WPDK_DIR 00:19:22.333 #undef SPDK_CONFIG_XNVME 00:19:22.333 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:19:22.333 15:37:51 -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:19:22.333 15:37:51 -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.333 15:37:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.333 15:37:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.333 15:37:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.333 15:37:51 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.333 15:37:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.333 15:37:51 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.333 15:37:51 -- paths/export.sh@5 -- # export PATH 00:19:22.333 15:37:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.333 15:37:51 -- common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:22.333 15:37:51 -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:19:22.333 15:37:51 -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:22.333 15:37:51 -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:19:22.333 15:37:51 -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:19:22.333 15:37:51 -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:19:22.333 15:37:51 -- pm/common@67 -- # TEST_TAG=N/A 00:19:22.333 15:37:51 -- pm/common@68 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:19:22.333 15:37:51 -- pm/common@70 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:19:22.333 15:37:51 -- pm/common@71 -- # uname -s 00:19:22.333 15:37:51 -- pm/common@71 -- # PM_OS=Linux 00:19:22.333 15:37:51 -- pm/common@73 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:19:22.333 15:37:51 -- pm/common@74 -- # [[ Linux == FreeBSD ]] 00:19:22.333 15:37:51 -- pm/common@76 -- # [[ Linux == Linux ]] 00:19:22.333 15:37:51 -- pm/common@76 -- # [[ 
QEMU != QEMU ]] 00:19:22.333 15:37:51 -- pm/common@83 -- # MONITOR_RESOURCES_PIDS=() 00:19:22.333 15:37:51 -- pm/common@83 -- # declare -A MONITOR_RESOURCES_PIDS 00:19:22.333 15:37:51 -- pm/common@85 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/power 00:19:22.333 15:37:51 -- common/autotest_common.sh@57 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@58 -- # export RUN_NIGHTLY 00:19:22.333 15:37:51 -- common/autotest_common.sh@61 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@62 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:19:22.333 15:37:51 -- common/autotest_common.sh@63 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@64 -- # export SPDK_RUN_VALGRIND 00:19:22.333 15:37:51 -- common/autotest_common.sh@65 -- # : 1 00:19:22.333 15:37:51 -- common/autotest_common.sh@66 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:19:22.333 15:37:51 -- common/autotest_common.sh@67 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@68 -- # export SPDK_TEST_UNITTEST 00:19:22.333 15:37:51 -- common/autotest_common.sh@69 -- # : 00:19:22.333 15:37:51 -- common/autotest_common.sh@70 -- # export SPDK_TEST_AUTOBUILD 00:19:22.333 15:37:51 -- common/autotest_common.sh@71 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@72 -- # export SPDK_TEST_RELEASE_BUILD 00:19:22.333 15:37:51 -- common/autotest_common.sh@73 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@74 -- # export SPDK_TEST_ISAL 00:19:22.333 15:37:51 -- common/autotest_common.sh@75 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@76 -- # export SPDK_TEST_ISCSI 00:19:22.333 15:37:51 -- common/autotest_common.sh@77 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@78 -- # export SPDK_TEST_ISCSI_INITIATOR 00:19:22.333 15:37:51 -- common/autotest_common.sh@79 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@80 -- # export SPDK_TEST_NVME 00:19:22.333 15:37:51 -- common/autotest_common.sh@81 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@82 -- # export SPDK_TEST_NVME_PMR 00:19:22.333 15:37:51 -- common/autotest_common.sh@83 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@84 -- # export SPDK_TEST_NVME_BP 00:19:22.333 15:37:51 -- common/autotest_common.sh@85 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@86 -- # export SPDK_TEST_NVME_CLI 00:19:22.333 15:37:51 -- common/autotest_common.sh@87 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@88 -- # export SPDK_TEST_NVME_CUSE 00:19:22.333 15:37:51 -- common/autotest_common.sh@89 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@90 -- # export SPDK_TEST_NVME_FDP 00:19:22.333 15:37:51 -- common/autotest_common.sh@91 -- # : 1 00:19:22.333 15:37:51 -- common/autotest_common.sh@92 -- # export SPDK_TEST_NVMF 00:19:22.333 15:37:51 -- common/autotest_common.sh@93 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@94 -- # export SPDK_TEST_VFIOUSER 00:19:22.333 15:37:51 -- common/autotest_common.sh@95 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@96 -- # export SPDK_TEST_VFIOUSER_QEMU 00:19:22.333 15:37:51 -- common/autotest_common.sh@97 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@98 -- # export SPDK_TEST_FUZZER 00:19:22.333 15:37:51 -- common/autotest_common.sh@99 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@100 -- # export SPDK_TEST_FUZZER_SHORT 00:19:22.333 15:37:51 -- common/autotest_common.sh@101 -- # : tcp 00:19:22.333 15:37:51 -- common/autotest_common.sh@102 -- # export SPDK_TEST_NVMF_TRANSPORT 00:19:22.333 15:37:51 
-- common/autotest_common.sh@103 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@104 -- # export SPDK_TEST_RBD 00:19:22.333 15:37:51 -- common/autotest_common.sh@105 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@106 -- # export SPDK_TEST_VHOST 00:19:22.333 15:37:51 -- common/autotest_common.sh@107 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@108 -- # export SPDK_TEST_BLOCKDEV 00:19:22.333 15:37:51 -- common/autotest_common.sh@109 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@110 -- # export SPDK_TEST_IOAT 00:19:22.333 15:37:51 -- common/autotest_common.sh@111 -- # : 0 00:19:22.333 15:37:51 -- common/autotest_common.sh@112 -- # export SPDK_TEST_BLOBFS 00:19:22.333 15:37:51 -- common/autotest_common.sh@113 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@114 -- # export SPDK_TEST_VHOST_INIT 00:19:22.334 15:37:51 -- common/autotest_common.sh@115 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@116 -- # export SPDK_TEST_LVOL 00:19:22.334 15:37:51 -- common/autotest_common.sh@117 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@118 -- # export SPDK_TEST_VBDEV_COMPRESS 00:19:22.334 15:37:51 -- common/autotest_common.sh@119 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@120 -- # export SPDK_RUN_ASAN 00:19:22.334 15:37:51 -- common/autotest_common.sh@121 -- # : 1 00:19:22.334 15:37:51 -- common/autotest_common.sh@122 -- # export SPDK_RUN_UBSAN 00:19:22.334 15:37:51 -- common/autotest_common.sh@123 -- # : 00:19:22.334 15:37:51 -- common/autotest_common.sh@124 -- # export SPDK_RUN_EXTERNAL_DPDK 00:19:22.334 15:37:51 -- common/autotest_common.sh@125 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@126 -- # export SPDK_RUN_NON_ROOT 00:19:22.334 15:37:51 -- common/autotest_common.sh@127 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@128 -- # export SPDK_TEST_CRYPTO 00:19:22.334 15:37:51 -- common/autotest_common.sh@129 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@130 -- # export SPDK_TEST_FTL 00:19:22.334 15:37:51 -- common/autotest_common.sh@131 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@132 -- # export SPDK_TEST_OCF 00:19:22.334 15:37:51 -- common/autotest_common.sh@133 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@134 -- # export SPDK_TEST_VMD 00:19:22.334 15:37:51 -- common/autotest_common.sh@135 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@136 -- # export SPDK_TEST_OPAL 00:19:22.334 15:37:51 -- common/autotest_common.sh@137 -- # : 00:19:22.334 15:37:51 -- common/autotest_common.sh@138 -- # export SPDK_TEST_NATIVE_DPDK 00:19:22.334 15:37:51 -- common/autotest_common.sh@139 -- # : true 00:19:22.334 15:37:51 -- common/autotest_common.sh@140 -- # export SPDK_AUTOTEST_X 00:19:22.334 15:37:51 -- common/autotest_common.sh@141 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@142 -- # export SPDK_TEST_RAID5 00:19:22.334 15:37:51 -- common/autotest_common.sh@143 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@144 -- # export SPDK_TEST_URING 00:19:22.334 15:37:51 -- common/autotest_common.sh@145 -- # : 1 00:19:22.334 15:37:51 -- common/autotest_common.sh@146 -- # export SPDK_TEST_USDT 00:19:22.334 15:37:51 -- common/autotest_common.sh@147 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@148 -- # export SPDK_TEST_USE_IGB_UIO 00:19:22.334 15:37:51 -- common/autotest_common.sh@149 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@150 -- # export SPDK_TEST_SCHEDULER 00:19:22.334 
15:37:51 -- common/autotest_common.sh@151 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@152 -- # export SPDK_TEST_SCANBUILD 00:19:22.334 15:37:51 -- common/autotest_common.sh@153 -- # : 00:19:22.334 15:37:51 -- common/autotest_common.sh@154 -- # export SPDK_TEST_NVMF_NICS 00:19:22.334 15:37:51 -- common/autotest_common.sh@155 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@156 -- # export SPDK_TEST_SMA 00:19:22.334 15:37:51 -- common/autotest_common.sh@157 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@158 -- # export SPDK_TEST_DAOS 00:19:22.334 15:37:51 -- common/autotest_common.sh@159 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@160 -- # export SPDK_TEST_XNVME 00:19:22.334 15:37:51 -- common/autotest_common.sh@161 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@162 -- # export SPDK_TEST_ACCEL_DSA 00:19:22.334 15:37:51 -- common/autotest_common.sh@163 -- # : 0 00:19:22.334 15:37:51 -- common/autotest_common.sh@164 -- # export SPDK_TEST_ACCEL_IAA 00:19:22.334 15:37:51 -- common/autotest_common.sh@166 -- # : 00:19:22.334 15:37:51 -- common/autotest_common.sh@167 -- # export SPDK_TEST_FUZZER_TARGET 00:19:22.334 15:37:51 -- common/autotest_common.sh@168 -- # : 1 00:19:22.334 15:37:51 -- common/autotest_common.sh@169 -- # export SPDK_TEST_NVMF_MDNS 00:19:22.334 15:37:51 -- common/autotest_common.sh@170 -- # : 1 00:19:22.334 15:37:51 -- common/autotest_common.sh@171 -- # export SPDK_JSONRPC_GO_CLIENT 00:19:22.334 15:37:51 -- common/autotest_common.sh@174 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@174 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@175 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@175 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@176 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@176 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@177 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:19:22.334 15:37:51 -- common/autotest_common.sh@177 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 
00:19:22.334 15:37:51 -- common/autotest_common.sh@180 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:19:22.334 15:37:51 -- common/autotest_common.sh@180 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:19:22.334 15:37:51 -- common/autotest_common.sh@184 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:22.334 15:37:51 -- common/autotest_common.sh@184 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:19:22.334 15:37:51 -- common/autotest_common.sh@188 -- # export PYTHONDONTWRITEBYTECODE=1 00:19:22.334 15:37:51 -- common/autotest_common.sh@188 -- # PYTHONDONTWRITEBYTECODE=1 00:19:22.334 15:37:51 -- common/autotest_common.sh@192 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:22.334 15:37:51 -- common/autotest_common.sh@192 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:19:22.334 15:37:51 -- common/autotest_common.sh@193 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:22.334 15:37:51 -- common/autotest_common.sh@193 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:19:22.334 15:37:51 -- common/autotest_common.sh@197 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:19:22.334 15:37:51 -- common/autotest_common.sh@198 -- # rm -rf /var/tmp/asan_suppression_file 00:19:22.334 15:37:51 -- common/autotest_common.sh@199 -- # cat 00:19:22.334 15:37:51 -- common/autotest_common.sh@225 -- # echo leak:libfuse3.so 00:19:22.334 15:37:51 -- common/autotest_common.sh@227 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:22.334 15:37:51 -- common/autotest_common.sh@227 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:19:22.334 15:37:51 -- common/autotest_common.sh@229 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:22.334 15:37:51 -- common/autotest_common.sh@229 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:19:22.334 15:37:51 -- common/autotest_common.sh@231 -- # '[' -z /var/spdk/dependencies ']' 00:19:22.334 15:37:51 -- common/autotest_common.sh@234 -- # export DEPENDENCY_DIR 00:19:22.334 15:37:51 -- common/autotest_common.sh@238 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:22.334 15:37:51 -- common/autotest_common.sh@238 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:19:22.334 15:37:51 -- common/autotest_common.sh@239 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:22.334 15:37:51 -- common/autotest_common.sh@239 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:19:22.334 15:37:51 -- common/autotest_common.sh@242 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:22.334 15:37:51 -- common/autotest_common.sh@242 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:19:22.334 15:37:51 -- common/autotest_common.sh@243 -- # export 
VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:22.334 15:37:51 -- common/autotest_common.sh@243 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:19:22.334 15:37:51 -- common/autotest_common.sh@245 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:22.334 15:37:51 -- common/autotest_common.sh@245 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:19:22.334 15:37:51 -- common/autotest_common.sh@248 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:22.334 15:37:51 -- common/autotest_common.sh@248 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:19:22.334 15:37:51 -- common/autotest_common.sh@251 -- # '[' 0 -eq 0 ']' 00:19:22.334 15:37:51 -- common/autotest_common.sh@252 -- # export valgrind= 00:19:22.334 15:37:51 -- common/autotest_common.sh@252 -- # valgrind= 00:19:22.334 15:37:51 -- common/autotest_common.sh@258 -- # uname -s 00:19:22.334 15:37:51 -- common/autotest_common.sh@258 -- # '[' Linux = Linux ']' 00:19:22.334 15:37:51 -- common/autotest_common.sh@259 -- # HUGEMEM=4096 00:19:22.334 15:37:51 -- common/autotest_common.sh@260 -- # export CLEAR_HUGE=yes 00:19:22.334 15:37:51 -- common/autotest_common.sh@260 -- # CLEAR_HUGE=yes 00:19:22.334 15:37:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:19:22.334 15:37:51 -- common/autotest_common.sh@261 -- # [[ 0 -eq 1 ]] 00:19:22.334 15:37:51 -- common/autotest_common.sh@268 -- # MAKE=make 00:19:22.334 15:37:51 -- common/autotest_common.sh@269 -- # MAKEFLAGS=-j10 00:19:22.334 15:37:51 -- common/autotest_common.sh@285 -- # export HUGEMEM=4096 00:19:22.334 15:37:51 -- common/autotest_common.sh@285 -- # HUGEMEM=4096 00:19:22.334 15:37:51 -- common/autotest_common.sh@287 -- # NO_HUGE=() 00:19:22.334 15:37:51 -- common/autotest_common.sh@288 -- # TEST_MODE= 00:19:22.334 15:37:51 -- common/autotest_common.sh@289 -- # for i in "$@" 00:19:22.334 15:37:51 -- common/autotest_common.sh@290 -- # case "$i" in 00:19:22.335 15:37:51 -- common/autotest_common.sh@295 -- # TEST_TRANSPORT=tcp 00:19:22.335 15:37:51 -- common/autotest_common.sh@307 -- # [[ -z 65097 ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@307 -- # kill -0 65097 00:19:22.335 15:37:51 -- common/autotest_common.sh@1666 -- # set_test_storage 2147483648 00:19:22.335 15:37:51 -- common/autotest_common.sh@317 -- # [[ -v testdir ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@319 -- # local requested_size=2147483648 00:19:22.335 15:37:51 -- common/autotest_common.sh@320 -- # local mount target_dir 00:19:22.335 15:37:51 -- common/autotest_common.sh@322 -- # local -A mounts fss sizes avails uses 00:19:22.335 15:37:51 -- common/autotest_common.sh@323 -- # local source fs size avail mount use 00:19:22.335 15:37:51 -- common/autotest_common.sh@325 -- # local storage_fallback storage_candidates 00:19:22.335 15:37:51 -- common/autotest_common.sh@327 -- # mktemp -udt spdk.XXXXXX 00:19:22.335 15:37:51 -- common/autotest_common.sh@327 -- # storage_fallback=/tmp/spdk.ALn0Zn 00:19:22.335 15:37:51 -- common/autotest_common.sh@332 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:19:22.335 15:37:51 -- common/autotest_common.sh@334 -- # [[ -n '' ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@339 -- # [[ -n '' ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@344 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvmf/target /tmp/spdk.ALn0Zn/tests/target /tmp/spdk.ALn0Zn 00:19:22.335 15:37:51 -- common/autotest_common.sh@347 -- # 
requested_size=2214592512 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@316 -- # df -T 00:19:22.335 15:37:51 -- common/autotest_common.sh@316 -- # grep -v Filesystem 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=devtmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=devtmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=4194304 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=4194304 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=0 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=6264516608 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267891712 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=3375104 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=2494353408 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=2507157504 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=12804096 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=13794390016 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=5230235648 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda5 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=btrfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=13794390016 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=20314062848 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=5230235648 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda2 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=ext4 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=843546624 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1012768768 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=100016128 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=/dev/vda3 00:19:22.335 15:37:51 -- 
common/autotest_common.sh@350 -- # fss["$mount"]=vfat 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=92499968 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=104607744 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=12107776 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=6267760640 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=6267895808 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=135168 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=tmpfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=1253572608 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=1253576704 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=4096 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvmf-tcp-vg-autotest/fedora38-libvirt/output 00:19:22.335 15:37:51 -- common/autotest_common.sh@350 -- # fss["$mount"]=fuse.sshfs 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # avails["$mount"]=92660158464 00:19:22.335 15:37:51 -- common/autotest_common.sh@351 -- # sizes["$mount"]=105088212992 00:19:22.335 15:37:51 -- common/autotest_common.sh@352 -- # uses["$mount"]=7042621440 00:19:22.335 15:37:51 -- common/autotest_common.sh@349 -- # read -r source fs size use avail _ mount 00:19:22.335 15:37:51 -- common/autotest_common.sh@355 -- # printf '* Looking for test storage...\n' 00:19:22.335 * Looking for test storage... 
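(Context, not part of the captured trace: the storage probe above reads `df -T`, records filesystem type, size, available and used space for every mount, and then, in the steps that follow, walks the candidate directories and keeps the first one whose backing filesystem has at least the requested ~2 GiB (the harness pads the 2147483648-byte request slightly), exporting it as SPDK_TEST_STORAGE. A minimal standalone sketch of that selection logic; the candidate list here is an assumption, not the harness's exact values:

    #!/usr/bin/env bash
    # Sketch only: pick a test-storage directory with enough free space.
    requested=$((2 * 1024 * 1024 * 1024))                     # ~2 GiB, as requested above
    candidates=("$HOME/spdk_repo/spdk/test/nvmf/target" "/tmp")  # assumed candidates
    for dir in "${candidates[@]}"; do
        [[ -d $dir ]] || continue
        # df -P reports the available space of the backing filesystem in 1K blocks.
        avail_kb=$(df -P "$dir" | awk 'NR==2 {print $4}')
        if (( avail_kb * 1024 >= requested )); then
            export SPDK_TEST_STORAGE="$dir"
            printf '* Found test storage at %s\n' "$dir"
            break
        fi
    done
)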
00:19:22.335 15:37:51 -- common/autotest_common.sh@357 -- # local target_space new_size 00:19:22.335 15:37:51 -- common/autotest_common.sh@358 -- # for target_dir in "${storage_candidates[@]}" 00:19:22.335 15:37:51 -- common/autotest_common.sh@361 -- # df /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.335 15:37:51 -- common/autotest_common.sh@361 -- # awk '$1 !~ /Filesystem/{print $6}' 00:19:22.335 15:37:51 -- common/autotest_common.sh@361 -- # mount=/home 00:19:22.335 15:37:51 -- common/autotest_common.sh@363 -- # target_space=13794390016 00:19:22.335 15:37:51 -- common/autotest_common.sh@364 -- # (( target_space == 0 || target_space < requested_size )) 00:19:22.335 15:37:51 -- common/autotest_common.sh@367 -- # (( target_space >= requested_size )) 00:19:22.335 15:37:51 -- common/autotest_common.sh@369 -- # [[ btrfs == tmpfs ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@369 -- # [[ btrfs == ramfs ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@369 -- # [[ /home == / ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@376 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.335 15:37:51 -- common/autotest_common.sh@376 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.335 15:37:51 -- common/autotest_common.sh@377 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.335 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:22.335 15:37:51 -- common/autotest_common.sh@378 -- # return 0 00:19:22.335 15:37:51 -- common/autotest_common.sh@1668 -- # set -o errtrace 00:19:22.335 15:37:51 -- common/autotest_common.sh@1669 -- # shopt -s extdebug 00:19:22.335 15:37:51 -- common/autotest_common.sh@1670 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:19:22.335 15:37:51 -- common/autotest_common.sh@1672 -- # PS4=' \t -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:19:22.335 15:37:51 -- common/autotest_common.sh@1673 -- # true 00:19:22.335 15:37:51 -- common/autotest_common.sh@1675 -- # xtrace_fd 00:19:22.335 15:37:51 -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:19:22.335 15:37:51 -- common/autotest_common.sh@27 -- # exec 00:19:22.335 15:37:51 -- common/autotest_common.sh@29 -- # exec 00:19:22.335 15:37:51 -- common/autotest_common.sh@31 -- # xtrace_restore 00:19:22.335 15:37:51 -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:19:22.335 15:37:51 -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:19:22.335 15:37:51 -- common/autotest_common.sh@18 -- # set -x 00:19:22.335 15:37:51 -- target/filesystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:22.335 15:37:51 -- nvmf/common.sh@7 -- # uname -s 00:19:22.335 15:37:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.335 15:37:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.335 15:37:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.335 15:37:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.335 15:37:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.335 15:37:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.335 15:37:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.335 15:37:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.335 15:37:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.335 15:37:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.335 15:37:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:22.335 15:37:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:22.335 15:37:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.335 15:37:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.335 15:37:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:22.335 15:37:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.335 15:37:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:22.335 15:37:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.336 15:37:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.336 15:37:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.336 15:37:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.336 15:37:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.336 15:37:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.336 15:37:51 -- paths/export.sh@5 -- # export PATH 00:19:22.336 15:37:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.336 15:37:51 -- nvmf/common.sh@47 -- # : 0 00:19:22.336 15:37:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:22.336 15:37:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:22.336 15:37:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.336 15:37:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.336 15:37:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.336 15:37:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:22.336 15:37:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:22.336 15:37:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:22.336 15:37:51 -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:19:22.336 15:37:51 -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:22.336 15:37:51 -- target/filesystem.sh@15 -- # nvmftestinit 00:19:22.336 15:37:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:22.336 15:37:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:22.336 15:37:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:22.336 15:37:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:22.336 15:37:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:22.336 15:37:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:22.336 15:37:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:22.336 15:37:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:22.336 15:37:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:22.336 15:37:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:22.336 15:37:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:22.336 15:37:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:22.336 15:37:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:22.336 15:37:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:22.336 15:37:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:22.336 15:37:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.336 15:37:51 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:22.336 15:37:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:22.336 15:37:51 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:22.336 15:37:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:22.336 15:37:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:22.336 15:37:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.336 15:37:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:22.336 15:37:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:22.336 15:37:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:22.336 15:37:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:22.336 15:37:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:22.336 15:37:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:22.336 Cannot find device "nvmf_tgt_br" 00:19:22.336 15:37:51 -- nvmf/common.sh@155 -- # true 00:19:22.336 15:37:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:22.336 Cannot find device "nvmf_tgt_br2" 00:19:22.336 15:37:51 -- nvmf/common.sh@156 -- # true 00:19:22.336 15:37:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:22.336 15:37:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:22.336 Cannot find device "nvmf_tgt_br" 00:19:22.336 15:37:51 -- nvmf/common.sh@158 -- # true 00:19:22.336 15:37:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:22.336 Cannot find device "nvmf_tgt_br2" 00:19:22.336 15:37:51 -- nvmf/common.sh@159 -- # true 00:19:22.336 15:37:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:22.336 15:37:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:22.336 15:37:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:22.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.336 15:37:51 -- nvmf/common.sh@162 -- # true 00:19:22.336 15:37:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:22.336 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:22.336 15:37:51 -- nvmf/common.sh@163 -- # true 00:19:22.336 15:37:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:22.336 15:37:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:22.336 15:37:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:22.336 15:37:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:22.336 15:37:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:22.336 15:37:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:22.336 15:37:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:22.336 15:37:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:22.336 15:37:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:22.336 15:37:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:22.336 15:37:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:22.336 15:37:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:22.336 15:37:51 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:22.336 15:37:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:22.336 15:37:51 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:22.336 15:37:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:22.336 15:37:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:22.336 15:37:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:22.336 15:37:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:22.336 15:37:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:22.336 15:37:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:22.336 15:37:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:22.336 15:37:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:22.336 15:37:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:22.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:19:22.336 00:19:22.336 --- 10.0.0.2 ping statistics --- 00:19:22.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.336 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:19:22.336 15:37:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:22.336 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:22.336 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:19:22.336 00:19:22.336 --- 10.0.0.3 ping statistics --- 00:19:22.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.337 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:22.337 15:37:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:22.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:22.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.042 ms 00:19:22.337 00:19:22.337 --- 10.0.0.1 ping statistics --- 00:19:22.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.337 rtt min/avg/max/mdev = 0.042/0.042/0.042/0.000 ms 00:19:22.337 15:37:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.337 15:37:51 -- nvmf/common.sh@422 -- # return 0 00:19:22.337 15:37:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:22.337 15:37:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.337 15:37:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:22.337 15:37:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:22.337 15:37:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:22.337 15:37:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:22.337 15:37:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:22.337 15:37:51 -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:19:22.337 15:37:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:22.337 15:37:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:22.337 15:37:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.337 ************************************ 00:19:22.337 START TEST nvmf_filesystem_no_in_capsule 00:19:22.337 ************************************ 00:19:22.337 15:37:51 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 0 00:19:22.337 15:37:51 -- target/filesystem.sh@47 -- # in_capsule=0 00:19:22.337 15:37:51 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:19:22.337 15:37:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:22.337 15:37:51 -- common/autotest_common.sh@710 -- # 
xtrace_disable 00:19:22.337 15:37:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.337 15:37:51 -- nvmf/common.sh@470 -- # nvmfpid=65262 00:19:22.337 15:37:51 -- nvmf/common.sh@471 -- # waitforlisten 65262 00:19:22.337 15:37:51 -- common/autotest_common.sh@817 -- # '[' -z 65262 ']' 00:19:22.337 15:37:51 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:22.337 15:37:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.337 15:37:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:22.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.337 15:37:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.337 15:37:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:22.337 15:37:51 -- common/autotest_common.sh@10 -- # set +x 00:19:22.337 [2024-04-26 15:37:51.944942] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:22.337 [2024-04-26 15:37:51.945047] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.337 [2024-04-26 15:37:52.087216] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:22.337 [2024-04-26 15:37:52.212590] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.337 [2024-04-26 15:37:52.212655] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.337 [2024-04-26 15:37:52.212667] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.337 [2024-04-26 15:37:52.212675] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.337 [2024-04-26 15:37:52.212683] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
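(Context, not part of the captured trace: the nvmf_veth_init sequence traced above builds a small virtual topology — a network namespace nvmf_tgt_ns_spdk holding the target-side veth ends, an initiator veth pair in the root namespace, and a bridge nvmf_br joining the host-side peers, with 10.0.0.1 on the initiator and 10.0.0.2/10.0.0.3 on the target interfaces — and then launches nvmf_tgt inside that namespace. A stripped-down sketch with a single target interface, using the same names and addresses as the log:

    # Sketch: minimal veth/netns topology for NVMe/TCP testing (names/IPs from the trace).
    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up

    # Bridge the host-side veth peers and admit NVMe/TCP traffic on port 4420.
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

    ping -c 1 10.0.0.2    # initiator -> target reachability check, as above
    # The target application is then started inside the namespace, e.g. (path as in this run):
    # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
)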
00:19:22.337 [2024-04-26 15:37:52.212854] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.337 [2024-04-26 15:37:52.213442] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.337 [2024-04-26 15:37:52.213621] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:22.337 [2024-04-26 15:37:52.213664] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.594 15:37:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:22.594 15:37:52 -- common/autotest_common.sh@850 -- # return 0 00:19:22.594 15:37:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:22.594 15:37:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:22.594 15:37:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 15:37:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.853 15:37:52 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:19:22.853 15:37:52 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:19:22.853 15:37:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.853 15:37:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 [2024-04-26 15:37:52.920603] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.853 15:37:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.853 15:37:52 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:19:22.853 15:37:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.853 15:37:52 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 Malloc1 00:19:22.853 15:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.853 15:37:53 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:22.853 15:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.853 15:37:53 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 15:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.853 15:37:53 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:22.853 15:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.853 15:37:53 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 15:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.853 15:37:53 -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:22.853 15:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.853 15:37:53 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 [2024-04-26 15:37:53.109445] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.853 15:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.853 15:37:53 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:19:22.853 15:37:53 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:19:22.853 15:37:53 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:22.853 15:37:53 -- common/autotest_common.sh@1366 -- # local bs 00:19:22.853 15:37:53 -- common/autotest_common.sh@1367 -- # local nb 00:19:22.853 15:37:53 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:19:22.853 15:37:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:22.853 15:37:53 -- common/autotest_common.sh@10 -- # set +x 00:19:22.853 
15:37:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:22.853 15:37:53 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:22.853 { 00:19:22.853 "aliases": [ 00:19:22.853 "806cc1ad-3ba4-472b-b459-d3117273f8e4" 00:19:22.853 ], 00:19:22.853 "assigned_rate_limits": { 00:19:22.853 "r_mbytes_per_sec": 0, 00:19:22.853 "rw_ios_per_sec": 0, 00:19:22.853 "rw_mbytes_per_sec": 0, 00:19:22.853 "w_mbytes_per_sec": 0 00:19:22.853 }, 00:19:22.853 "block_size": 512, 00:19:22.853 "claim_type": "exclusive_write", 00:19:22.853 "claimed": true, 00:19:22.853 "driver_specific": {}, 00:19:22.853 "memory_domains": [ 00:19:22.853 { 00:19:22.853 "dma_device_id": "system", 00:19:22.853 "dma_device_type": 1 00:19:22.853 }, 00:19:22.853 { 00:19:22.853 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:22.853 "dma_device_type": 2 00:19:22.853 } 00:19:22.853 ], 00:19:22.853 "name": "Malloc1", 00:19:22.853 "num_blocks": 1048576, 00:19:22.853 "product_name": "Malloc disk", 00:19:22.853 "supported_io_types": { 00:19:22.853 "abort": true, 00:19:22.853 "compare": false, 00:19:22.853 "compare_and_write": false, 00:19:22.853 "flush": true, 00:19:22.853 "nvme_admin": false, 00:19:22.853 "nvme_io": false, 00:19:22.853 "read": true, 00:19:22.853 "reset": true, 00:19:22.853 "unmap": true, 00:19:22.853 "write": true, 00:19:22.853 "write_zeroes": true 00:19:22.853 }, 00:19:22.853 "uuid": "806cc1ad-3ba4-472b-b459-d3117273f8e4", 00:19:22.853 "zoned": false 00:19:22.853 } 00:19:22.853 ]' 00:19:22.853 15:37:53 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:23.111 15:37:53 -- common/autotest_common.sh@1369 -- # bs=512 00:19:23.111 15:37:53 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:23.111 15:37:53 -- common/autotest_common.sh@1370 -- # nb=1048576 00:19:23.111 15:37:53 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:19:23.111 15:37:53 -- common/autotest_common.sh@1374 -- # echo 512 00:19:23.111 15:37:53 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:19:23.111 15:37:53 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:23.368 15:37:53 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:19:23.368 15:37:53 -- common/autotest_common.sh@1184 -- # local i=0 00:19:23.368 15:37:53 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:19:23.368 15:37:53 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:23.368 15:37:53 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:25.266 15:37:55 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:25.266 15:37:55 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:25.266 15:37:55 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:25.266 15:37:55 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:25.266 15:37:55 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:25.266 15:37:55 -- common/autotest_common.sh@1194 -- # return 0 00:19:25.266 15:37:55 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:19:25.266 15:37:55 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:19:25.266 15:37:55 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:19:25.266 15:37:55 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:19:25.266 15:37:55 -- setup/common.sh@76 -- # local 
dev=nvme0n1 00:19:25.266 15:37:55 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:25.266 15:37:55 -- setup/common.sh@80 -- # echo 536870912 00:19:25.266 15:37:55 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:19:25.266 15:37:55 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:19:25.266 15:37:55 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:19:25.266 15:37:55 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:25.266 15:37:55 -- target/filesystem.sh@69 -- # partprobe 00:19:25.560 15:37:55 -- target/filesystem.sh@70 -- # sleep 1 00:19:26.538 15:37:56 -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:19:26.538 15:37:56 -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:19:26.538 15:37:56 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:26.538 15:37:56 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:26.538 15:37:56 -- common/autotest_common.sh@10 -- # set +x 00:19:26.538 ************************************ 00:19:26.538 START TEST filesystem_ext4 00:19:26.538 ************************************ 00:19:26.538 15:37:56 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:19:26.538 15:37:56 -- target/filesystem.sh@18 -- # fstype=ext4 00:19:26.538 15:37:56 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:26.538 15:37:56 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:19:26.538 15:37:56 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:19:26.538 15:37:56 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:19:26.538 15:37:56 -- common/autotest_common.sh@914 -- # local i=0 00:19:26.538 15:37:56 -- common/autotest_common.sh@915 -- # local force 00:19:26.538 15:37:56 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:19:26.538 15:37:56 -- common/autotest_common.sh@918 -- # force=-F 00:19:26.538 15:37:56 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:19:26.538 mke2fs 1.46.5 (30-Dec-2021) 00:19:26.538 Discarding device blocks: 0/522240 done 00:19:26.538 Creating filesystem with 522240 1k blocks and 130560 inodes 00:19:26.538 Filesystem UUID: decc2756-091a-4f6c-8822-fd04b74e35e4 00:19:26.538 Superblock backups stored on blocks: 00:19:26.538 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:19:26.538 00:19:26.538 Allocating group tables: 0/64 done 00:19:26.538 Writing inode tables: 0/64 done 00:19:26.538 Creating journal (8192 blocks): done 00:19:26.538 Writing superblocks and filesystem accounting information: 0/64 done 00:19:26.538 00:19:26.538 15:37:56 -- common/autotest_common.sh@931 -- # return 0 00:19:26.538 15:37:56 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:26.796 15:37:56 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:26.796 15:37:56 -- target/filesystem.sh@25 -- # sync 00:19:26.796 15:37:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:26.796 15:37:57 -- target/filesystem.sh@27 -- # sync 00:19:26.796 15:37:57 -- target/filesystem.sh@29 -- # i=0 00:19:26.796 15:37:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:26.796 15:37:57 -- target/filesystem.sh@37 -- # kill -0 65262 00:19:26.796 15:37:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:26.796 15:37:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:26.796 15:37:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:26.796 15:37:57 -- target/filesystem.sh@43 -- # lsblk -l -o 
NAME 00:19:26.796 ************************************ 00:19:26.796 END TEST filesystem_ext4 00:19:26.796 ************************************ 00:19:26.796 00:19:26.796 real 0m0.371s 00:19:26.796 user 0m0.018s 00:19:26.796 sys 0m0.055s 00:19:26.796 15:37:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:26.796 15:37:57 -- common/autotest_common.sh@10 -- # set +x 00:19:27.053 15:37:57 -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:19:27.053 15:37:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:27.053 15:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:27.053 15:37:57 -- common/autotest_common.sh@10 -- # set +x 00:19:27.053 ************************************ 00:19:27.053 START TEST filesystem_btrfs 00:19:27.053 ************************************ 00:19:27.053 15:37:57 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:19:27.053 15:37:57 -- target/filesystem.sh@18 -- # fstype=btrfs 00:19:27.053 15:37:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:27.053 15:37:57 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:19:27.053 15:37:57 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:19:27.053 15:37:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:19:27.053 15:37:57 -- common/autotest_common.sh@914 -- # local i=0 00:19:27.053 15:37:57 -- common/autotest_common.sh@915 -- # local force 00:19:27.053 15:37:57 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:19:27.053 15:37:57 -- common/autotest_common.sh@920 -- # force=-f 00:19:27.053 15:37:57 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:19:27.053 btrfs-progs v6.6.2 00:19:27.053 See https://btrfs.readthedocs.io for more information. 00:19:27.053 00:19:27.053 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:19:27.053 NOTE: several default settings have changed in version 5.15, please make sure 00:19:27.053 this does not affect your deployments: 00:19:27.053 - DUP for metadata (-m dup) 00:19:27.053 - enabled no-holes (-O no-holes) 00:19:27.053 - enabled free-space-tree (-R free-space-tree) 00:19:27.053 00:19:27.053 Label: (null) 00:19:27.053 UUID: b32805e4-a82b-44ec-809c-0e272313511e 00:19:27.053 Node size: 16384 00:19:27.053 Sector size: 4096 00:19:27.053 Filesystem size: 510.00MiB 00:19:27.053 Block group profiles: 00:19:27.053 Data: single 8.00MiB 00:19:27.053 Metadata: DUP 32.00MiB 00:19:27.053 System: DUP 8.00MiB 00:19:27.053 SSD detected: yes 00:19:27.053 Zoned device: no 00:19:27.053 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:19:27.053 Runtime features: free-space-tree 00:19:27.053 Checksum: crc32c 00:19:27.053 Number of devices: 1 00:19:27.053 Devices: 00:19:27.053 ID SIZE PATH 00:19:27.053 1 510.00MiB /dev/nvme0n1p1 00:19:27.053 00:19:27.053 15:37:57 -- common/autotest_common.sh@931 -- # return 0 00:19:27.053 15:37:57 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:27.053 15:37:57 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:27.053 15:37:57 -- target/filesystem.sh@25 -- # sync 00:19:27.310 15:37:57 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:27.310 15:37:57 -- target/filesystem.sh@27 -- # sync 00:19:27.310 15:37:57 -- target/filesystem.sh@29 -- # i=0 00:19:27.310 15:37:57 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:27.310 15:37:57 -- target/filesystem.sh@37 -- # kill -0 65262 00:19:27.310 15:37:57 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:27.310 15:37:57 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:27.310 15:37:57 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:27.310 15:37:57 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:27.310 ************************************ 00:19:27.310 END TEST filesystem_btrfs 00:19:27.310 ************************************ 00:19:27.310 00:19:27.310 real 0m0.232s 00:19:27.310 user 0m0.019s 00:19:27.310 sys 0m0.058s 00:19:27.310 15:37:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:27.310 15:37:57 -- common/autotest_common.sh@10 -- # set +x 00:19:27.310 15:37:57 -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:19:27.310 15:37:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:27.310 15:37:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:27.310 15:37:57 -- common/autotest_common.sh@10 -- # set +x 00:19:27.310 ************************************ 00:19:27.310 START TEST filesystem_xfs 00:19:27.310 ************************************ 00:19:27.310 15:37:57 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:19:27.310 15:37:57 -- target/filesystem.sh@18 -- # fstype=xfs 00:19:27.310 15:37:57 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:27.310 15:37:57 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:19:27.311 15:37:57 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:19:27.311 15:37:57 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:19:27.311 15:37:57 -- common/autotest_common.sh@914 -- # local i=0 00:19:27.311 15:37:57 -- common/autotest_common.sh@915 -- # local force 00:19:27.311 15:37:57 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:19:27.311 15:37:57 -- common/autotest_common.sh@920 -- # force=-f 00:19:27.311 15:37:57 -- 
common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:19:27.311 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:19:27.311 = sectsz=512 attr=2, projid32bit=1 00:19:27.311 = crc=1 finobt=1, sparse=1, rmapbt=0 00:19:27.311 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:19:27.311 data = bsize=4096 blocks=130560, imaxpct=25 00:19:27.311 = sunit=0 swidth=0 blks 00:19:27.311 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:19:27.311 log =internal log bsize=4096 blocks=16384, version=2 00:19:27.311 = sectsz=512 sunit=0 blks, lazy-count=1 00:19:27.311 realtime =none extsz=4096 blocks=0, rtextents=0 00:19:28.242 Discarding blocks...Done. 00:19:28.242 15:37:58 -- common/autotest_common.sh@931 -- # return 0 00:19:28.243 15:37:58 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:30.770 15:38:00 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:30.770 15:38:00 -- target/filesystem.sh@25 -- # sync 00:19:30.770 15:38:00 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:30.770 15:38:00 -- target/filesystem.sh@27 -- # sync 00:19:30.770 15:38:00 -- target/filesystem.sh@29 -- # i=0 00:19:30.770 15:38:00 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:30.770 15:38:00 -- target/filesystem.sh@37 -- # kill -0 65262 00:19:30.770 15:38:00 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:30.770 15:38:00 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:30.770 15:38:00 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:30.770 15:38:00 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:30.770 ************************************ 00:19:30.770 END TEST filesystem_xfs 00:19:30.770 ************************************ 00:19:30.770 00:19:30.770 real 0m3.131s 00:19:30.770 user 0m0.017s 00:19:30.770 sys 0m0.057s 00:19:30.770 15:38:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:30.770 15:38:00 -- common/autotest_common.sh@10 -- # set +x 00:19:30.770 15:38:00 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:19:30.770 15:38:00 -- target/filesystem.sh@93 -- # sync 00:19:30.770 15:38:00 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:30.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:30.770 15:38:00 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:30.770 15:38:00 -- common/autotest_common.sh@1205 -- # local i=0 00:19:30.770 15:38:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:30.770 15:38:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.770 15:38:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:30.770 15:38:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:30.770 15:38:00 -- common/autotest_common.sh@1217 -- # return 0 00:19:30.770 15:38:00 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:30.770 15:38:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:30.770 15:38:00 -- common/autotest_common.sh@10 -- # set +x 00:19:30.770 15:38:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:30.770 15:38:00 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:30.770 15:38:00 -- target/filesystem.sh@101 -- # killprocess 65262 00:19:30.770 15:38:00 -- common/autotest_common.sh@936 -- # '[' -z 65262 ']' 00:19:30.770 15:38:00 -- common/autotest_common.sh@940 -- # kill -0 65262 00:19:30.770 15:38:00 -- 
common/autotest_common.sh@941 -- # uname 00:19:30.770 15:38:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:30.770 15:38:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65262 00:19:30.770 killing process with pid 65262 00:19:30.770 15:38:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:30.770 15:38:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:30.770 15:38:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65262' 00:19:30.770 15:38:00 -- common/autotest_common.sh@955 -- # kill 65262 00:19:30.770 15:38:00 -- common/autotest_common.sh@960 -- # wait 65262 00:19:31.027 ************************************ 00:19:31.027 END TEST nvmf_filesystem_no_in_capsule 00:19:31.027 ************************************ 00:19:31.027 15:38:01 -- target/filesystem.sh@102 -- # nvmfpid= 00:19:31.027 00:19:31.027 real 0m9.395s 00:19:31.027 user 0m35.239s 00:19:31.027 sys 0m1.747s 00:19:31.027 15:38:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:31.027 15:38:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.027 15:38:01 -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:19:31.027 15:38:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:31.027 15:38:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:31.027 15:38:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.285 ************************************ 00:19:31.285 START TEST nvmf_filesystem_in_capsule 00:19:31.285 ************************************ 00:19:31.285 15:38:01 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_part 4096 00:19:31.285 15:38:01 -- target/filesystem.sh@47 -- # in_capsule=4096 00:19:31.285 15:38:01 -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:19:31.285 15:38:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:31.285 15:38:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:31.285 15:38:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.285 15:38:01 -- nvmf/common.sh@470 -- # nvmfpid=65593 00:19:31.285 15:38:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:31.285 15:38:01 -- nvmf/common.sh@471 -- # waitforlisten 65593 00:19:31.285 15:38:01 -- common/autotest_common.sh@817 -- # '[' -z 65593 ']' 00:19:31.285 15:38:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.285 15:38:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:31.285 15:38:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.285 15:38:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:31.285 15:38:01 -- common/autotest_common.sh@10 -- # set +x 00:19:31.285 [2024-04-26 15:38:01.445627] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
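(Context, not part of the captured trace: each nvmf_filesystem_part pass above follows the same cycle — configure the target over JSON-RPC (transport, malloc bdev, subsystem, namespace, listener), connect from the host with nvme-cli, partition the new namespace, then mkfs/mount/write/unmount for ext4, btrfs and xfs, and finally disconnect and delete the subsystem. A condensed sketch of that cycle; the rpc.py invocations correspond to the rpc_cmd calls in the trace, and the hostnqn/hostid values are the ones generated earlier with `nvme gen-hostnqn`:

    # Target side (inside the netns), mirroring the rpc_cmd calls above:
    RPC=./scripts/rpc.py
    $RPC nvmf_create_transport -t tcp -o -u 8192 -c 0      # -c 4096 in the in-capsule pass
    $RPC bdev_malloc_create 512 512 -b Malloc1             # 512 MB bdev, 512 B blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # Host side: connect, partition, and exercise one filesystem (ext4 shown).
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1 \
        --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID"
    parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100%
    partprobe && sleep 1
    mkfs.ext4 -F /dev/nvme0n1p1                            # btrfs/xfs use 'mkfs.<fs> -f'
    mkdir -p /mnt/device && mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa && sync && rm /mnt/device/aaa && sync
    umount /mnt/device

    # Teardown for the pass.
    flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 && sync
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1
    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
)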
00:19:31.285 [2024-04-26 15:38:01.445953] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.543 [2024-04-26 15:38:01.588659] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:31.543 [2024-04-26 15:38:01.697277] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.543 [2024-04-26 15:38:01.697574] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.543 [2024-04-26 15:38:01.697704] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.543 [2024-04-26 15:38:01.697717] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.543 [2024-04-26 15:38:01.697741] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:31.543 [2024-04-26 15:38:01.697878] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:31.543 [2024-04-26 15:38:01.697973] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:31.543 [2024-04-26 15:38:01.698097] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:31.543 [2024-04-26 15:38:01.698104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.476 15:38:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:32.476 15:38:02 -- common/autotest_common.sh@850 -- # return 0 00:19:32.476 15:38:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:32.476 15:38:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 15:38:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.476 15:38:02 -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:19:32.476 15:38:02 -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:19:32.476 15:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 [2024-04-26 15:38:02.447147] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.476 15:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.476 15:38:02 -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:19:32.476 15:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 Malloc1 00:19:32.476 15:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.476 15:38:02 -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:32.476 15:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 15:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.476 15:38:02 -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:32.476 15:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 15:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.476 15:38:02 -- target/filesystem.sh@56 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:32.476 15:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 [2024-04-26 15:38:02.641449] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.476 15:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.476 15:38:02 -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:19:32.476 15:38:02 -- common/autotest_common.sh@1364 -- # local bdev_name=Malloc1 00:19:32.476 15:38:02 -- common/autotest_common.sh@1365 -- # local bdev_info 00:19:32.476 15:38:02 -- common/autotest_common.sh@1366 -- # local bs 00:19:32.476 15:38:02 -- common/autotest_common.sh@1367 -- # local nb 00:19:32.476 15:38:02 -- common/autotest_common.sh@1368 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:19:32.476 15:38:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:32.476 15:38:02 -- common/autotest_common.sh@10 -- # set +x 00:19:32.476 15:38:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:32.476 15:38:02 -- common/autotest_common.sh@1368 -- # bdev_info='[ 00:19:32.476 { 00:19:32.476 "aliases": [ 00:19:32.476 "41cd5614-057e-4a3f-a343-274f856e4726" 00:19:32.476 ], 00:19:32.476 "assigned_rate_limits": { 00:19:32.476 "r_mbytes_per_sec": 0, 00:19:32.476 "rw_ios_per_sec": 0, 00:19:32.476 "rw_mbytes_per_sec": 0, 00:19:32.476 "w_mbytes_per_sec": 0 00:19:32.476 }, 00:19:32.476 "block_size": 512, 00:19:32.476 "claim_type": "exclusive_write", 00:19:32.476 "claimed": true, 00:19:32.476 "driver_specific": {}, 00:19:32.476 "memory_domains": [ 00:19:32.476 { 00:19:32.476 "dma_device_id": "system", 00:19:32.476 "dma_device_type": 1 00:19:32.476 }, 00:19:32.476 { 00:19:32.476 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:32.476 "dma_device_type": 2 00:19:32.476 } 00:19:32.476 ], 00:19:32.476 "name": "Malloc1", 00:19:32.476 "num_blocks": 1048576, 00:19:32.476 "product_name": "Malloc disk", 00:19:32.476 "supported_io_types": { 00:19:32.476 "abort": true, 00:19:32.476 "compare": false, 00:19:32.476 "compare_and_write": false, 00:19:32.476 "flush": true, 00:19:32.476 "nvme_admin": false, 00:19:32.476 "nvme_io": false, 00:19:32.476 "read": true, 00:19:32.476 "reset": true, 00:19:32.476 "unmap": true, 00:19:32.476 "write": true, 00:19:32.476 "write_zeroes": true 00:19:32.476 }, 00:19:32.476 "uuid": "41cd5614-057e-4a3f-a343-274f856e4726", 00:19:32.476 "zoned": false 00:19:32.476 } 00:19:32.476 ]' 00:19:32.476 15:38:02 -- common/autotest_common.sh@1369 -- # jq '.[] .block_size' 00:19:32.476 15:38:02 -- common/autotest_common.sh@1369 -- # bs=512 00:19:32.476 15:38:02 -- common/autotest_common.sh@1370 -- # jq '.[] .num_blocks' 00:19:32.476 15:38:02 -- common/autotest_common.sh@1370 -- # nb=1048576 00:19:32.476 15:38:02 -- common/autotest_common.sh@1373 -- # bdev_size=512 00:19:32.476 15:38:02 -- common/autotest_common.sh@1374 -- # echo 512 00:19:32.476 15:38:02 -- target/filesystem.sh@58 -- # malloc_size=536870912 00:19:32.476 15:38:02 -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:19:32.734 15:38:02 -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:19:32.734 15:38:02 -- common/autotest_common.sh@1184 -- # local i=0 00:19:32.734 15:38:02 -- common/autotest_common.sh@1185 -- # local 
nvme_device_counter=1 nvme_devices=0 00:19:32.734 15:38:02 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:19:32.734 15:38:02 -- common/autotest_common.sh@1191 -- # sleep 2 00:19:35.259 15:38:04 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:19:35.259 15:38:04 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:19:35.259 15:38:04 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:19:35.259 15:38:04 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:19:35.259 15:38:04 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:19:35.259 15:38:04 -- common/autotest_common.sh@1194 -- # return 0 00:19:35.259 15:38:04 -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:19:35.259 15:38:04 -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:19:35.259 15:38:04 -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:19:35.259 15:38:04 -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:19:35.259 15:38:04 -- setup/common.sh@76 -- # local dev=nvme0n1 00:19:35.259 15:38:04 -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:19:35.259 15:38:04 -- setup/common.sh@80 -- # echo 536870912 00:19:35.259 15:38:04 -- target/filesystem.sh@64 -- # nvme_size=536870912 00:19:35.259 15:38:04 -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:19:35.259 15:38:04 -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:19:35.259 15:38:04 -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:19:35.259 15:38:04 -- target/filesystem.sh@69 -- # partprobe 00:19:35.259 15:38:05 -- target/filesystem.sh@70 -- # sleep 1 00:19:35.822 15:38:06 -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:19:35.822 15:38:06 -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:19:35.822 15:38:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:35.822 15:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:35.822 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.079 ************************************ 00:19:36.079 START TEST filesystem_in_capsule_ext4 00:19:36.079 ************************************ 00:19:36.079 15:38:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create ext4 nvme0n1 00:19:36.079 15:38:06 -- target/filesystem.sh@18 -- # fstype=ext4 00:19:36.079 15:38:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:36.079 15:38:06 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:19:36.079 15:38:06 -- common/autotest_common.sh@912 -- # local fstype=ext4 00:19:36.079 15:38:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:19:36.079 15:38:06 -- common/autotest_common.sh@914 -- # local i=0 00:19:36.079 15:38:06 -- common/autotest_common.sh@915 -- # local force 00:19:36.079 15:38:06 -- common/autotest_common.sh@917 -- # '[' ext4 = ext4 ']' 00:19:36.079 15:38:06 -- common/autotest_common.sh@918 -- # force=-F 00:19:36.079 15:38:06 -- common/autotest_common.sh@923 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:19:36.079 mke2fs 1.46.5 (30-Dec-2021) 00:19:36.079 Discarding device blocks: 0/522240 done 00:19:36.079 Creating filesystem with 522240 1k blocks and 130560 inodes 00:19:36.079 Filesystem UUID: ddf1f936-4ac6-446c-b581-f07203db5095 00:19:36.079 Superblock backups stored on blocks: 00:19:36.079 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:19:36.079 00:19:36.079 Allocating group tables: 0/64 done 
00:19:36.079 Writing inode tables: 0/64 done 00:19:36.079 Creating journal (8192 blocks): done 00:19:36.079 Writing superblocks and filesystem accounting information: 0/64 done 00:19:36.079 00:19:36.079 15:38:06 -- common/autotest_common.sh@931 -- # return 0 00:19:36.079 15:38:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:36.337 15:38:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:36.337 15:38:06 -- target/filesystem.sh@25 -- # sync 00:19:36.337 15:38:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:36.337 15:38:06 -- target/filesystem.sh@27 -- # sync 00:19:36.337 15:38:06 -- target/filesystem.sh@29 -- # i=0 00:19:36.337 15:38:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:36.337 15:38:06 -- target/filesystem.sh@37 -- # kill -0 65593 00:19:36.337 15:38:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:36.337 15:38:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:36.337 15:38:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:36.337 15:38:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:36.337 ************************************ 00:19:36.337 END TEST filesystem_in_capsule_ext4 00:19:36.337 ************************************ 00:19:36.337 00:19:36.337 real 0m0.355s 00:19:36.337 user 0m0.024s 00:19:36.337 sys 0m0.053s 00:19:36.337 15:38:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:36.337 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.337 15:38:06 -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:19:36.337 15:38:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:36.337 15:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.337 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.595 ************************************ 00:19:36.595 START TEST filesystem_in_capsule_btrfs 00:19:36.595 ************************************ 00:19:36.595 15:38:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create btrfs nvme0n1 00:19:36.595 15:38:06 -- target/filesystem.sh@18 -- # fstype=btrfs 00:19:36.595 15:38:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:36.595 15:38:06 -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:19:36.595 15:38:06 -- common/autotest_common.sh@912 -- # local fstype=btrfs 00:19:36.595 15:38:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:19:36.595 15:38:06 -- common/autotest_common.sh@914 -- # local i=0 00:19:36.595 15:38:06 -- common/autotest_common.sh@915 -- # local force 00:19:36.595 15:38:06 -- common/autotest_common.sh@917 -- # '[' btrfs = ext4 ']' 00:19:36.595 15:38:06 -- common/autotest_common.sh@920 -- # force=-f 00:19:36.595 15:38:06 -- common/autotest_common.sh@923 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:19:36.595 btrfs-progs v6.6.2 00:19:36.595 See https://btrfs.readthedocs.io for more information. 00:19:36.595 00:19:36.595 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:19:36.595 NOTE: several default settings have changed in version 5.15, please make sure 00:19:36.595 this does not affect your deployments: 00:19:36.595 - DUP for metadata (-m dup) 00:19:36.595 - enabled no-holes (-O no-holes) 00:19:36.595 - enabled free-space-tree (-R free-space-tree) 00:19:36.595 00:19:36.595 Label: (null) 00:19:36.595 UUID: e21421cb-f7e9-4658-ad32-de0444e23326 00:19:36.595 Node size: 16384 00:19:36.595 Sector size: 4096 00:19:36.595 Filesystem size: 510.00MiB 00:19:36.595 Block group profiles: 00:19:36.595 Data: single 8.00MiB 00:19:36.595 Metadata: DUP 32.00MiB 00:19:36.595 System: DUP 8.00MiB 00:19:36.595 SSD detected: yes 00:19:36.595 Zoned device: no 00:19:36.595 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:19:36.595 Runtime features: free-space-tree 00:19:36.595 Checksum: crc32c 00:19:36.595 Number of devices: 1 00:19:36.595 Devices: 00:19:36.595 ID SIZE PATH 00:19:36.595 1 510.00MiB /dev/nvme0n1p1 00:19:36.595 00:19:36.595 15:38:06 -- common/autotest_common.sh@931 -- # return 0 00:19:36.595 15:38:06 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:36.595 15:38:06 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:36.595 15:38:06 -- target/filesystem.sh@25 -- # sync 00:19:36.595 15:38:06 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:36.595 15:38:06 -- target/filesystem.sh@27 -- # sync 00:19:36.595 15:38:06 -- target/filesystem.sh@29 -- # i=0 00:19:36.595 15:38:06 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:36.595 15:38:06 -- target/filesystem.sh@37 -- # kill -0 65593 00:19:36.595 15:38:06 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:36.595 15:38:06 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:36.595 15:38:06 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:36.595 15:38:06 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:36.595 ************************************ 00:19:36.595 END TEST filesystem_in_capsule_btrfs 00:19:36.595 ************************************ 00:19:36.595 00:19:36.595 real 0m0.216s 00:19:36.595 user 0m0.025s 00:19:36.595 sys 0m0.059s 00:19:36.595 15:38:06 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:36.595 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.595 15:38:06 -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:19:36.595 15:38:06 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:19:36.595 15:38:06 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:36.595 15:38:06 -- common/autotest_common.sh@10 -- # set +x 00:19:36.852 ************************************ 00:19:36.852 START TEST filesystem_in_capsule_xfs 00:19:36.852 ************************************ 00:19:36.852 15:38:06 -- common/autotest_common.sh@1111 -- # nvmf_filesystem_create xfs nvme0n1 00:19:36.852 15:38:06 -- target/filesystem.sh@18 -- # fstype=xfs 00:19:36.852 15:38:06 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:19:36.852 15:38:06 -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:19:36.852 15:38:06 -- common/autotest_common.sh@912 -- # local fstype=xfs 00:19:36.852 15:38:06 -- common/autotest_common.sh@913 -- # local dev_name=/dev/nvme0n1p1 00:19:36.852 15:38:06 -- common/autotest_common.sh@914 -- # local i=0 00:19:36.852 15:38:06 -- common/autotest_common.sh@915 -- # local force 00:19:36.852 15:38:06 -- common/autotest_common.sh@917 -- # '[' xfs = ext4 ']' 00:19:36.852 15:38:06 -- common/autotest_common.sh@920 -- # force=-f 
00:19:36.853 15:38:06 -- common/autotest_common.sh@923 -- # mkfs.xfs -f /dev/nvme0n1p1 00:19:36.853 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:19:36.853 = sectsz=512 attr=2, projid32bit=1 00:19:36.853 = crc=1 finobt=1, sparse=1, rmapbt=0 00:19:36.853 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:19:36.853 data = bsize=4096 blocks=130560, imaxpct=25 00:19:36.853 = sunit=0 swidth=0 blks 00:19:36.853 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:19:36.853 log =internal log bsize=4096 blocks=16384, version=2 00:19:36.853 = sectsz=512 sunit=0 blks, lazy-count=1 00:19:36.853 realtime =none extsz=4096 blocks=0, rtextents=0 00:19:37.783 Discarding blocks...Done. 00:19:37.783 15:38:07 -- common/autotest_common.sh@931 -- # return 0 00:19:37.783 15:38:07 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:19:39.679 15:38:09 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:19:39.679 15:38:09 -- target/filesystem.sh@25 -- # sync 00:19:39.679 15:38:09 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:19:39.679 15:38:09 -- target/filesystem.sh@27 -- # sync 00:19:39.679 15:38:09 -- target/filesystem.sh@29 -- # i=0 00:19:39.679 15:38:09 -- target/filesystem.sh@30 -- # umount /mnt/device 00:19:39.679 15:38:09 -- target/filesystem.sh@37 -- # kill -0 65593 00:19:39.679 15:38:09 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:19:39.679 15:38:09 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:19:39.679 15:38:09 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:19:39.679 15:38:09 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:19:39.679 ************************************ 00:19:39.679 END TEST filesystem_in_capsule_xfs 00:19:39.679 ************************************ 00:19:39.679 00:19:39.679 real 0m2.623s 00:19:39.679 user 0m0.023s 00:19:39.679 sys 0m0.054s 00:19:39.679 15:38:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:39.679 15:38:09 -- common/autotest_common.sh@10 -- # set +x 00:19:39.679 15:38:09 -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:19:39.679 15:38:09 -- target/filesystem.sh@93 -- # sync 00:19:39.679 15:38:09 -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:19:39.679 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:39.679 15:38:09 -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:19:39.679 15:38:09 -- common/autotest_common.sh@1205 -- # local i=0 00:19:39.679 15:38:09 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:19:39.679 15:38:09 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:39.679 15:38:09 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:19:39.679 15:38:09 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:19:39.679 15:38:09 -- common/autotest_common.sh@1217 -- # return 0 00:19:39.679 15:38:09 -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:39.679 15:38:09 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:39.679 15:38:09 -- common/autotest_common.sh@10 -- # set +x 00:19:39.679 15:38:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:39.679 15:38:09 -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:39.679 15:38:09 -- target/filesystem.sh@101 -- # killprocess 65593 00:19:39.679 15:38:09 -- common/autotest_common.sh@936 -- # '[' -z 65593 ']' 00:19:39.679 15:38:09 -- common/autotest_common.sh@940 -- # kill -0 65593 
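The three in-capsule subtests traced above (ext4, btrfs, xfs) all drive the same nvmf_filesystem_create cycle from target/filesystem.sh. A condensed sketch of that cycle, reconstructed only from the commands visible in this trace — the SPDKISFASTANDAWESOME serial, the /mnt/device mount point and the single-partition layout are taken from this run and are not guaranteed for other setups:

# Hypothetical replay of the per-filesystem cycle traced above: locate the exported
# namespace by its serial, partition it, mkfs, then run a small mount/IO smoke test.
fstype=${1:-ext4}                                   # ext4 | btrfs | xfs, as exercised above
nvme_name=$(lsblk -l -o NAME,SERIAL | grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)')
parted -s "/dev/${nvme_name}" mklabel gpt mkpart SPDK_TEST 0% 100%
partprobe
sleep 1
force=-f
[ "$fstype" = ext4 ] && force=-F                    # mkfs.ext4 takes -F, btrfs/xfs take -f
"mkfs.${fstype}" "$force" "/dev/${nvme_name}p1"
mkdir -p /mnt/device
mount "/dev/${nvme_name}p1" /mnt/device
touch /mnt/device/aaa && sync
rm /mnt/device/aaa && sync
umount /mnt/device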
00:19:39.679 15:38:09 -- common/autotest_common.sh@941 -- # uname 00:19:39.679 15:38:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:39.679 15:38:09 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 65593 00:19:39.679 killing process with pid 65593 00:19:39.679 15:38:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:39.679 15:38:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:39.679 15:38:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 65593' 00:19:39.679 15:38:09 -- common/autotest_common.sh@955 -- # kill 65593 00:19:39.679 15:38:09 -- common/autotest_common.sh@960 -- # wait 65593 00:19:39.937 15:38:10 -- target/filesystem.sh@102 -- # nvmfpid= 00:19:39.937 00:19:39.937 real 0m8.848s 00:19:39.937 user 0m33.314s 00:19:39.937 sys 0m1.630s 00:19:39.937 15:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:39.937 15:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:39.937 ************************************ 00:19:39.937 END TEST nvmf_filesystem_in_capsule 00:19:39.937 ************************************ 00:19:40.196 15:38:10 -- target/filesystem.sh@108 -- # nvmftestfini 00:19:40.196 15:38:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:40.196 15:38:10 -- nvmf/common.sh@117 -- # sync 00:19:40.196 15:38:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:40.196 15:38:10 -- nvmf/common.sh@120 -- # set +e 00:19:40.196 15:38:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:40.196 15:38:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:40.196 rmmod nvme_tcp 00:19:40.196 rmmod nvme_fabrics 00:19:40.196 rmmod nvme_keyring 00:19:40.196 15:38:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:40.196 15:38:10 -- nvmf/common.sh@124 -- # set -e 00:19:40.196 15:38:10 -- nvmf/common.sh@125 -- # return 0 00:19:40.196 15:38:10 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:19:40.196 15:38:10 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:40.196 15:38:10 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:40.196 15:38:10 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:40.196 15:38:10 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:40.196 15:38:10 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:40.196 15:38:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.196 15:38:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.196 15:38:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.196 15:38:10 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:40.196 00:19:40.196 real 0m19.230s 00:19:40.196 user 1m8.865s 00:19:40.196 sys 0m3.812s 00:19:40.196 ************************************ 00:19:40.196 END TEST nvmf_filesystem 00:19:40.196 ************************************ 00:19:40.196 15:38:10 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:40.196 15:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.196 15:38:10 -- nvmf/nvmf.sh@25 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:19:40.196 15:38:10 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:40.196 15:38:10 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:40.196 15:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.454 ************************************ 00:19:40.454 START TEST nvmf_discovery 00:19:40.454 ************************************ 00:19:40.454 15:38:10 -- 
common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:19:40.454 * Looking for test storage... 00:19:40.454 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:40.454 15:38:10 -- target/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:40.454 15:38:10 -- nvmf/common.sh@7 -- # uname -s 00:19:40.454 15:38:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:40.454 15:38:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:40.454 15:38:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:40.454 15:38:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:40.454 15:38:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:40.454 15:38:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:40.454 15:38:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:40.454 15:38:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:40.455 15:38:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:40.455 15:38:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:40.455 15:38:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:40.455 15:38:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:40.455 15:38:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:40.455 15:38:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:40.455 15:38:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:40.455 15:38:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:40.455 15:38:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:40.455 15:38:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:40.455 15:38:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:40.455 15:38:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:40.455 15:38:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.455 15:38:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.455 15:38:10 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.455 15:38:10 -- paths/export.sh@5 -- # export PATH 00:19:40.455 15:38:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:40.455 15:38:10 -- nvmf/common.sh@47 -- # : 0 00:19:40.455 15:38:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:40.455 15:38:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:40.455 15:38:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:40.455 15:38:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:40.455 15:38:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:40.455 15:38:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:40.455 15:38:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:40.455 15:38:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:40.455 15:38:10 -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:19:40.455 15:38:10 -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:19:40.455 15:38:10 -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:19:40.455 15:38:10 -- target/discovery.sh@15 -- # hash nvme 00:19:40.455 15:38:10 -- target/discovery.sh@20 -- # nvmftestinit 00:19:40.455 15:38:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:40.455 15:38:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:40.455 15:38:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:40.455 15:38:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:40.455 15:38:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:40.455 15:38:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:40.455 15:38:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:40.455 15:38:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:40.455 15:38:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:40.455 15:38:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:40.455 15:38:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:40.455 15:38:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:40.455 15:38:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:40.455 15:38:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:40.455 15:38:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:40.455 15:38:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:40.455 15:38:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:40.455 15:38:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:40.455 15:38:10 -- nvmf/common.sh@145 -- # 
NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:40.455 15:38:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:40.455 15:38:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:40.455 15:38:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:40.455 15:38:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:40.455 15:38:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:40.455 15:38:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:40.455 15:38:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:40.455 15:38:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:40.455 15:38:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:40.455 Cannot find device "nvmf_tgt_br" 00:19:40.455 15:38:10 -- nvmf/common.sh@155 -- # true 00:19:40.455 15:38:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:40.455 Cannot find device "nvmf_tgt_br2" 00:19:40.455 15:38:10 -- nvmf/common.sh@156 -- # true 00:19:40.455 15:38:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:40.455 15:38:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:40.455 Cannot find device "nvmf_tgt_br" 00:19:40.455 15:38:10 -- nvmf/common.sh@158 -- # true 00:19:40.455 15:38:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:40.455 Cannot find device "nvmf_tgt_br2" 00:19:40.455 15:38:10 -- nvmf/common.sh@159 -- # true 00:19:40.455 15:38:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:40.455 15:38:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:40.455 15:38:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:40.455 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.455 15:38:10 -- nvmf/common.sh@162 -- # true 00:19:40.455 15:38:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:40.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:40.713 15:38:10 -- nvmf/common.sh@163 -- # true 00:19:40.713 15:38:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:40.713 15:38:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:40.713 15:38:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:40.713 15:38:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:40.713 15:38:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:40.713 15:38:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:40.713 15:38:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:40.713 15:38:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:40.713 15:38:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:40.713 15:38:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:40.713 15:38:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:40.713 15:38:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:40.713 15:38:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:40.714 15:38:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:40.714 15:38:10 
-- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:40.714 15:38:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:40.714 15:38:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:40.714 15:38:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:40.714 15:38:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:40.714 15:38:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:40.714 15:38:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:40.714 15:38:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:40.714 15:38:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:40.714 15:38:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:40.714 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:40.714 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:19:40.714 00:19:40.714 --- 10.0.0.2 ping statistics --- 00:19:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.714 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:19:40.714 15:38:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:40.714 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:40.714 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:19:40.714 00:19:40.714 --- 10.0.0.3 ping statistics --- 00:19:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.714 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:40.714 15:38:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:40.714 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:40.714 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.023 ms 00:19:40.714 00:19:40.714 --- 10.0.0.1 ping statistics --- 00:19:40.714 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:40.714 rtt min/avg/max/mdev = 0.023/0.023/0.023/0.000 ms 00:19:40.714 15:38:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:40.714 15:38:10 -- nvmf/common.sh@422 -- # return 0 00:19:40.714 15:38:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:40.714 15:38:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:40.714 15:38:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:40.714 15:38:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:40.714 15:38:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:40.714 15:38:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:40.714 15:38:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:40.714 15:38:10 -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:19:40.714 15:38:10 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:40.714 15:38:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:40.714 15:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.714 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
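Before nvmfappstart launches the target, nvmf_veth_init has already built the virtual topology whose ping checks appear above. Roughly, and only as a sketch — the interface names, the nvmf_tgt_ns_spdk namespace and the 10.0.0.0/24 addresses are copied from this trace:

# Hypothetical recap of the topology nvmf_veth_init sets up: one veth pair for the
# initiator side, two pairs whose far ends live in the target's network namespace,
# all attached to one bridge, with TCP/4420 accepted from the initiator interface.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up && ip link set nvmf_init_br up
ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3    # initiator -> target namespace sanity checks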
00:19:40.714 15:38:10 -- nvmf/common.sh@470 -- # nvmfpid=66071 00:19:40.714 15:38:10 -- nvmf/common.sh@471 -- # waitforlisten 66071 00:19:40.714 15:38:10 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:40.714 15:38:10 -- common/autotest_common.sh@817 -- # '[' -z 66071 ']' 00:19:40.714 15:38:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:40.714 15:38:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:40.714 15:38:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:40.714 15:38:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:40.714 15:38:10 -- common/autotest_common.sh@10 -- # set +x 00:19:40.972 [2024-04-26 15:38:11.040926] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:40.972 [2024-04-26 15:38:11.041000] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:40.972 [2024-04-26 15:38:11.176436] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:41.230 [2024-04-26 15:38:11.278925] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:41.230 [2024-04-26 15:38:11.279311] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:41.230 [2024-04-26 15:38:11.279443] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:41.230 [2024-04-26 15:38:11.279568] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:41.230 [2024-04-26 15:38:11.279601] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
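waitforlisten above only returns once the freshly launched nvmf_tgt answers on its RPC socket. A minimal stand-in, assuming the default /var/tmp/spdk.sock path and the spdk_repo layout visible in this log:

# Hypothetical stand-in for waitforlisten: keep checking that the target process is
# still alive and poll its RPC socket until a trivial rpc_get_methods call succeeds.
pid=$1
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
for _ in $(seq 1 100); do
    kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt ($pid) exited during startup" >&2; exit 1; }
    if [ -S /var/tmp/spdk.sock ] && "$rpc" rpc_get_methods >/dev/null 2>&1; then
        exit 0
    fi
    sleep 0.2
done
echo "timed out waiting for /var/tmp/spdk.sock" >&2
exit 1

The harness helper is more general (it can target a non-default RPC address); this sketch hard-codes the defaults shown above for brevity.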
00:19:41.230 [2024-04-26 15:38:11.279856] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:41.230 [2024-04-26 15:38:11.279974] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:41.230 [2024-04-26 15:38:11.280057] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:41.230 [2024-04-26 15:38:11.280058] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:41.797 15:38:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:41.797 15:38:12 -- common/autotest_common.sh@850 -- # return 0 00:19:41.797 15:38:12 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:41.797 15:38:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:41.797 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:41.797 15:38:12 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:41.797 15:38:12 -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:41.797 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.797 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:41.797 [2024-04-26 15:38:12.058718] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:41.797 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:41.797 15:38:12 -- target/discovery.sh@26 -- # seq 1 4 00:19:41.797 15:38:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:19:41.797 15:38:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:19:41.797 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:41.797 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 Null1 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 [2024-04-26 15:38:12.110778] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:19:42.056 15:38:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 Null2 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:19:42.056 15:38:12 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:19:42.056 15:38:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 Null3 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.056 15:38:12 -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:19:42.056 15:38:12 -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:19:42.056 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.056 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.056 Null4 00:19:42.056 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.057 15:38:12 -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:19:42.057 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.057 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.057 15:38:12 -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:19:42.057 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.057 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.057 15:38:12 -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:19:42.057 
15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.057 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.057 15:38:12 -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:19:42.057 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.057 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.057 15:38:12 -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:19:42.057 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.057 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.057 15:38:12 -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 4420 00:19:42.057 00:19:42.057 Discovery Log Number of Records 6, Generation counter 6 00:19:42.057 =====Discovery Log Entry 0====== 00:19:42.057 trtype: tcp 00:19:42.057 adrfam: ipv4 00:19:42.057 subtype: current discovery subsystem 00:19:42.057 treq: not required 00:19:42.057 portid: 0 00:19:42.057 trsvcid: 4420 00:19:42.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:42.057 traddr: 10.0.0.2 00:19:42.057 eflags: explicit discovery connections, duplicate discovery information 00:19:42.057 sectype: none 00:19:42.057 =====Discovery Log Entry 1====== 00:19:42.057 trtype: tcp 00:19:42.057 adrfam: ipv4 00:19:42.057 subtype: nvme subsystem 00:19:42.057 treq: not required 00:19:42.057 portid: 0 00:19:42.057 trsvcid: 4420 00:19:42.057 subnqn: nqn.2016-06.io.spdk:cnode1 00:19:42.057 traddr: 10.0.0.2 00:19:42.057 eflags: none 00:19:42.057 sectype: none 00:19:42.057 =====Discovery Log Entry 2====== 00:19:42.057 trtype: tcp 00:19:42.057 adrfam: ipv4 00:19:42.057 subtype: nvme subsystem 00:19:42.057 treq: not required 00:19:42.057 portid: 0 00:19:42.057 trsvcid: 4420 00:19:42.057 subnqn: nqn.2016-06.io.spdk:cnode2 00:19:42.057 traddr: 10.0.0.2 00:19:42.057 eflags: none 00:19:42.057 sectype: none 00:19:42.057 =====Discovery Log Entry 3====== 00:19:42.057 trtype: tcp 00:19:42.057 adrfam: ipv4 00:19:42.057 subtype: nvme subsystem 00:19:42.057 treq: not required 00:19:42.057 portid: 0 00:19:42.057 trsvcid: 4420 00:19:42.057 subnqn: nqn.2016-06.io.spdk:cnode3 00:19:42.057 traddr: 10.0.0.2 00:19:42.057 eflags: none 00:19:42.057 sectype: none 00:19:42.057 =====Discovery Log Entry 4====== 00:19:42.057 trtype: tcp 00:19:42.057 adrfam: ipv4 00:19:42.057 subtype: nvme subsystem 00:19:42.057 treq: not required 00:19:42.057 portid: 0 00:19:42.057 trsvcid: 4420 00:19:42.057 subnqn: nqn.2016-06.io.spdk:cnode4 00:19:42.057 traddr: 10.0.0.2 00:19:42.057 eflags: none 00:19:42.057 sectype: none 00:19:42.057 =====Discovery Log Entry 5====== 00:19:42.057 trtype: tcp 00:19:42.057 adrfam: ipv4 00:19:42.057 subtype: discovery subsystem referral 00:19:42.057 treq: not required 00:19:42.057 portid: 0 00:19:42.057 trsvcid: 4430 00:19:42.057 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:19:42.057 traddr: 10.0.0.2 00:19:42.057 eflags: none 00:19:42.057 sectype: none 00:19:42.057 Perform nvmf subsystem discovery via RPC 00:19:42.057 15:38:12 -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:19:42.057 15:38:12 -- 
target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:19:42.057 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.057 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.057 [2024-04-26 15:38:12.306825] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:19:42.057 [ 00:19:42.057 { 00:19:42.057 "allow_any_host": true, 00:19:42.057 "hosts": [], 00:19:42.057 "listen_addresses": [ 00:19:42.057 { 00:19:42.057 "adrfam": "IPv4", 00:19:42.057 "traddr": "10.0.0.2", 00:19:42.057 "transport": "TCP", 00:19:42.057 "trsvcid": "4420", 00:19:42.057 "trtype": "TCP" 00:19:42.057 } 00:19:42.057 ], 00:19:42.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:42.057 "subtype": "Discovery" 00:19:42.057 }, 00:19:42.057 { 00:19:42.057 "allow_any_host": true, 00:19:42.057 "hosts": [], 00:19:42.057 "listen_addresses": [ 00:19:42.057 { 00:19:42.057 "adrfam": "IPv4", 00:19:42.057 "traddr": "10.0.0.2", 00:19:42.057 "transport": "TCP", 00:19:42.057 "trsvcid": "4420", 00:19:42.057 "trtype": "TCP" 00:19:42.057 } 00:19:42.057 ], 00:19:42.057 "max_cntlid": 65519, 00:19:42.057 "max_namespaces": 32, 00:19:42.057 "min_cntlid": 1, 00:19:42.057 "model_number": "SPDK bdev Controller", 00:19:42.057 "namespaces": [ 00:19:42.057 { 00:19:42.057 "bdev_name": "Null1", 00:19:42.057 "name": "Null1", 00:19:42.057 "nguid": "6A99A2EAC88B45BB86C1F16EF0E4AB9F", 00:19:42.057 "nsid": 1, 00:19:42.057 "uuid": "6a99a2ea-c88b-45bb-86c1-f16ef0e4ab9f" 00:19:42.057 } 00:19:42.057 ], 00:19:42.057 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:42.057 "serial_number": "SPDK00000000000001", 00:19:42.057 "subtype": "NVMe" 00:19:42.057 }, 00:19:42.057 { 00:19:42.057 "allow_any_host": true, 00:19:42.057 "hosts": [], 00:19:42.057 "listen_addresses": [ 00:19:42.057 { 00:19:42.057 "adrfam": "IPv4", 00:19:42.057 "traddr": "10.0.0.2", 00:19:42.058 "transport": "TCP", 00:19:42.058 "trsvcid": "4420", 00:19:42.058 "trtype": "TCP" 00:19:42.058 } 00:19:42.058 ], 00:19:42.058 "max_cntlid": 65519, 00:19:42.058 "max_namespaces": 32, 00:19:42.058 "min_cntlid": 1, 00:19:42.058 "model_number": "SPDK bdev Controller", 00:19:42.058 "namespaces": [ 00:19:42.058 { 00:19:42.058 "bdev_name": "Null2", 00:19:42.058 "name": "Null2", 00:19:42.058 "nguid": "0E030AA8AA1C41088410A3BF82166689", 00:19:42.058 "nsid": 1, 00:19:42.058 "uuid": "0e030aa8-aa1c-4108-8410-a3bf82166689" 00:19:42.058 } 00:19:42.058 ], 00:19:42.058 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:19:42.058 "serial_number": "SPDK00000000000002", 00:19:42.058 "subtype": "NVMe" 00:19:42.058 }, 00:19:42.058 { 00:19:42.058 "allow_any_host": true, 00:19:42.058 "hosts": [], 00:19:42.058 "listen_addresses": [ 00:19:42.058 { 00:19:42.058 "adrfam": "IPv4", 00:19:42.058 "traddr": "10.0.0.2", 00:19:42.058 "transport": "TCP", 00:19:42.058 "trsvcid": "4420", 00:19:42.058 "trtype": "TCP" 00:19:42.058 } 00:19:42.058 ], 00:19:42.058 "max_cntlid": 65519, 00:19:42.058 "max_namespaces": 32, 00:19:42.058 "min_cntlid": 1, 00:19:42.058 "model_number": "SPDK bdev Controller", 00:19:42.058 "namespaces": [ 00:19:42.058 { 00:19:42.058 "bdev_name": "Null3", 00:19:42.058 "name": "Null3", 00:19:42.058 "nguid": "E40C3DC07ABB40E2AD9E00173D3DE933", 00:19:42.058 "nsid": 1, 00:19:42.058 "uuid": "e40c3dc0-7abb-40e2-ad9e-00173d3de933" 00:19:42.058 } 00:19:42.058 ], 00:19:42.058 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:19:42.058 "serial_number": "SPDK00000000000003", 00:19:42.058 "subtype": "NVMe" 
00:19:42.058 }, 00:19:42.058 { 00:19:42.058 "allow_any_host": true, 00:19:42.058 "hosts": [], 00:19:42.058 "listen_addresses": [ 00:19:42.058 { 00:19:42.058 "adrfam": "IPv4", 00:19:42.058 "traddr": "10.0.0.2", 00:19:42.058 "transport": "TCP", 00:19:42.058 "trsvcid": "4420", 00:19:42.058 "trtype": "TCP" 00:19:42.058 } 00:19:42.058 ], 00:19:42.058 "max_cntlid": 65519, 00:19:42.058 "max_namespaces": 32, 00:19:42.058 "min_cntlid": 1, 00:19:42.058 "model_number": "SPDK bdev Controller", 00:19:42.058 "namespaces": [ 00:19:42.058 { 00:19:42.058 "bdev_name": "Null4", 00:19:42.058 "name": "Null4", 00:19:42.058 "nguid": "0A274CE8478F40AFBE4D71B6DEEF0186", 00:19:42.058 "nsid": 1, 00:19:42.058 "uuid": "0a274ce8-478f-40af-be4d-71b6deef0186" 00:19:42.058 } 00:19:42.058 ], 00:19:42.058 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:19:42.058 "serial_number": "SPDK00000000000004", 00:19:42.058 "subtype": "NVMe" 00:19:42.058 } 00:19:42.058 ] 00:19:42.058 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.058 15:38:12 -- target/discovery.sh@42 -- # seq 1 4 00:19:42.058 15:38:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:19:42.058 15:38:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:42.058 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.058 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:19:42.317 15:38:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:19:42.317 15:38:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:19:42.317 15:38:12 -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
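The nvmf_get_subsystems payload printed above is plain JSON, so it can be filtered the same way the script filters bdev_get_bdevs a few lines further down. A small, assumed rpc.py invocation (socket and repo path as in this run) that pulls out the four cnode NQNs and their listener ports:

# Hypothetical query against the running target: list NVMe subsystems with their TCP ports.
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_get_subsystems \
  | jq -r '.[] | select(.subtype == "NVMe") | "\(.nqn) \(.listen_addresses[0].trsvcid)"'
# Expected for this run: nqn.2016-06.io.spdk:cnode1..cnode4, each on trsvcid 4420.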
00:19:42.317 15:38:12 -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:19:42.317 15:38:12 -- target/discovery.sh@49 -- # jq -r '.[].name' 00:19:42.317 15:38:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:42.317 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.317 15:38:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:42.317 15:38:12 -- target/discovery.sh@49 -- # check_bdevs= 00:19:42.317 15:38:12 -- target/discovery.sh@50 -- # '[' -n '' ']' 00:19:42.317 15:38:12 -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:19:42.317 15:38:12 -- target/discovery.sh@57 -- # nvmftestfini 00:19:42.317 15:38:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:42.317 15:38:12 -- nvmf/common.sh@117 -- # sync 00:19:42.317 15:38:12 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:42.317 15:38:12 -- nvmf/common.sh@120 -- # set +e 00:19:42.317 15:38:12 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:42.317 15:38:12 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:42.317 rmmod nvme_tcp 00:19:42.317 rmmod nvme_fabrics 00:19:42.317 rmmod nvme_keyring 00:19:42.317 15:38:12 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:42.317 15:38:12 -- nvmf/common.sh@124 -- # set -e 00:19:42.317 15:38:12 -- nvmf/common.sh@125 -- # return 0 00:19:42.317 15:38:12 -- nvmf/common.sh@478 -- # '[' -n 66071 ']' 00:19:42.317 15:38:12 -- nvmf/common.sh@479 -- # killprocess 66071 00:19:42.317 15:38:12 -- common/autotest_common.sh@936 -- # '[' -z 66071 ']' 00:19:42.317 15:38:12 -- common/autotest_common.sh@940 -- # kill -0 66071 00:19:42.317 15:38:12 -- common/autotest_common.sh@941 -- # uname 00:19:42.317 15:38:12 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:42.317 15:38:12 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66071 00:19:42.317 killing process with pid 66071 00:19:42.317 15:38:12 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:42.317 15:38:12 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:42.317 15:38:12 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66071' 00:19:42.317 15:38:12 -- common/autotest_common.sh@955 -- # kill 66071 00:19:42.317 [2024-04-26 15:38:12.552484] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:19:42.317 15:38:12 -- common/autotest_common.sh@960 -- # wait 66071 00:19:42.576 15:38:12 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:42.576 15:38:12 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:42.576 15:38:12 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:42.576 15:38:12 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:42.576 15:38:12 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:42.576 15:38:12 -- nvmf/common.sh@617 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.576 15:38:12 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.576 15:38:12 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.576 15:38:12 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:42.576 ************************************ 00:19:42.576 END TEST nvmf_discovery 00:19:42.576 ************************************ 00:19:42.576 00:19:42.576 real 0m2.335s 00:19:42.576 user 0m6.244s 00:19:42.576 sys 0m0.600s 00:19:42.576 15:38:12 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:42.576 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.835 15:38:12 -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:19:42.835 15:38:12 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:42.835 15:38:12 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:42.835 15:38:12 -- common/autotest_common.sh@10 -- # set +x 00:19:42.835 ************************************ 00:19:42.835 START TEST nvmf_referrals 00:19:42.835 ************************************ 00:19:42.835 15:38:12 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:19:42.835 * Looking for test storage... 00:19:42.835 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:42.835 15:38:13 -- target/referrals.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:42.835 15:38:13 -- nvmf/common.sh@7 -- # uname -s 00:19:42.835 15:38:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:42.835 15:38:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:42.835 15:38:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:42.835 15:38:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:42.835 15:38:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:42.835 15:38:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:42.835 15:38:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:42.835 15:38:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:42.835 15:38:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:42.835 15:38:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:42.835 15:38:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:42.835 15:38:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:42.835 15:38:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:42.835 15:38:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:42.835 15:38:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:42.835 15:38:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:42.835 15:38:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:42.835 15:38:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:42.835 15:38:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:42.835 15:38:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:42.835 15:38:13 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.835 15:38:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.835 15:38:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.835 15:38:13 -- paths/export.sh@5 -- # export PATH 00:19:42.835 15:38:13 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:42.835 15:38:13 -- nvmf/common.sh@47 -- # : 0 00:19:42.835 15:38:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:42.835 15:38:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:42.835 15:38:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:42.835 15:38:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:42.835 15:38:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:42.835 15:38:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:42.835 15:38:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:42.835 15:38:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:42.835 15:38:13 -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:19:42.835 15:38:13 -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:19:42.835 15:38:13 -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:19:42.835 15:38:13 -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:19:42.835 15:38:13 -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:19:42.835 15:38:13 -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:19:42.835 15:38:13 -- target/referrals.sh@37 -- # nvmftestinit 00:19:42.835 15:38:13 -- nvmf/common.sh@430 -- # '[' 
-z tcp ']' 00:19:42.835 15:38:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:42.835 15:38:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:42.835 15:38:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:42.835 15:38:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:42.835 15:38:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:42.835 15:38:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:42.835 15:38:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:42.835 15:38:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:42.835 15:38:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:42.835 15:38:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:42.835 15:38:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:42.835 15:38:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:42.835 15:38:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:42.835 15:38:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:42.835 15:38:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:42.835 15:38:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:42.836 15:38:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:42.836 15:38:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:42.836 15:38:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:42.836 15:38:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:42.836 15:38:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:42.836 15:38:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:42.836 15:38:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:42.836 15:38:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:42.836 15:38:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:42.836 15:38:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:42.836 15:38:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:42.836 Cannot find device "nvmf_tgt_br" 00:19:42.836 15:38:13 -- nvmf/common.sh@155 -- # true 00:19:42.836 15:38:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:42.836 Cannot find device "nvmf_tgt_br2" 00:19:42.836 15:38:13 -- nvmf/common.sh@156 -- # true 00:19:42.836 15:38:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:42.836 15:38:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:42.836 Cannot find device "nvmf_tgt_br" 00:19:42.836 15:38:13 -- nvmf/common.sh@158 -- # true 00:19:42.836 15:38:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:43.094 Cannot find device "nvmf_tgt_br2" 00:19:43.094 15:38:13 -- nvmf/common.sh@159 -- # true 00:19:43.094 15:38:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:43.094 15:38:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:43.094 15:38:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:43.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.094 15:38:13 -- nvmf/common.sh@162 -- # true 00:19:43.094 15:38:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:43.094 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:43.094 15:38:13 -- nvmf/common.sh@163 -- # true 00:19:43.094 15:38:13 -- nvmf/common.sh@166 
-- # ip netns add nvmf_tgt_ns_spdk 00:19:43.094 15:38:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:43.094 15:38:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:43.094 15:38:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:43.094 15:38:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:43.094 15:38:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:43.094 15:38:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:43.094 15:38:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:43.094 15:38:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:43.094 15:38:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:43.094 15:38:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:43.094 15:38:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:43.094 15:38:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:43.094 15:38:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:43.094 15:38:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:43.094 15:38:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:43.094 15:38:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:43.094 15:38:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:43.094 15:38:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:43.094 15:38:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:43.411 15:38:13 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:43.411 15:38:13 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:43.411 15:38:13 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:43.411 15:38:13 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:43.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:43.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:19:43.411 00:19:43.411 --- 10.0.0.2 ping statistics --- 00:19:43.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.411 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:19:43.411 15:38:13 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:43.411 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:43.411 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.030 ms 00:19:43.411 00:19:43.411 --- 10.0.0.3 ping statistics --- 00:19:43.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.411 rtt min/avg/max/mdev = 0.030/0.030/0.030/0.000 ms 00:19:43.411 15:38:13 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:43.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:43.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:19:43.411 00:19:43.411 --- 10.0.0.1 ping statistics --- 00:19:43.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:43.411 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:19:43.411 15:38:13 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:43.411 15:38:13 -- nvmf/common.sh@422 -- # return 0 00:19:43.411 15:38:13 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:43.411 15:38:13 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:43.411 15:38:13 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:43.411 15:38:13 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:43.411 15:38:13 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:43.411 15:38:13 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:43.411 15:38:13 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:43.411 15:38:13 -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:19:43.411 15:38:13 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:43.411 15:38:13 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:43.411 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:19:43.411 15:38:13 -- nvmf/common.sh@470 -- # nvmfpid=66306 00:19:43.411 15:38:13 -- nvmf/common.sh@471 -- # waitforlisten 66306 00:19:43.411 15:38:13 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:43.411 15:38:13 -- common/autotest_common.sh@817 -- # '[' -z 66306 ']' 00:19:43.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:43.411 15:38:13 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:43.411 15:38:13 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:43.411 15:38:13 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:43.411 15:38:13 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:43.411 15:38:13 -- common/autotest_common.sh@10 -- # set +x 00:19:43.411 [2024-04-26 15:38:13.500624] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:43.411 [2024-04-26 15:38:13.500721] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:43.411 [2024-04-26 15:38:13.640286] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:43.669 [2024-04-26 15:38:13.763046] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:43.669 [2024-04-26 15:38:13.763322] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:43.669 [2024-04-26 15:38:13.763470] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:43.669 [2024-04-26 15:38:13.763595] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:43.669 [2024-04-26 15:38:13.763636] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:43.669 [2024-04-26 15:38:13.763905] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:43.669 [2024-04-26 15:38:13.764007] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:43.669 [2024-04-26 15:38:13.764077] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:43.669 [2024-04-26 15:38:13.764078] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:44.235 15:38:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:44.235 15:38:14 -- common/autotest_common.sh@850 -- # return 0 00:19:44.235 15:38:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:44.235 15:38:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:44.235 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.235 15:38:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.235 15:38:14 -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:44.235 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.235 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.235 [2024-04-26 15:38:14.503351] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.235 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.235 15:38:14 -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:19:44.235 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.235 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.235 [2024-04-26 15:38:14.525326] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:19:44.493 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:19:44.493 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.493 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.493 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:19:44.493 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.493 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.493 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:19:44.493 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.493 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.493 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@48 -- # jq length 00:19:44.493 15:38:14 -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:19:44.493 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.493 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.493 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:19:44.493 15:38:14 -- target/referrals.sh@49 -- # get_referral_ips rpc 00:19:44.493 15:38:14 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:19:44.493 15:38:14 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:19:44.493 15:38:14 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 
00:19:44.493 15:38:14 -- target/referrals.sh@21 -- # sort 00:19:44.493 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.493 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.493 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:19:44.493 15:38:14 -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:19:44.493 15:38:14 -- target/referrals.sh@50 -- # get_referral_ips nvme 00:19:44.494 15:38:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:19:44.494 15:38:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:19:44.494 15:38:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:44.494 15:38:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:19:44.494 15:38:14 -- target/referrals.sh@26 -- # sort 00:19:44.494 15:38:14 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:19:44.494 15:38:14 -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:19:44.494 15:38:14 -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:19:44.494 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.494 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.494 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:19:44.752 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.752 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.752 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:19:44.752 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.752 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.752 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:19:44.752 15:38:14 -- target/referrals.sh@56 -- # jq length 00:19:44.752 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.752 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.752 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:19:44.752 15:38:14 -- target/referrals.sh@57 -- # get_referral_ips nvme 00:19:44.752 15:38:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:19:44.752 15:38:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # sort 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # echo 00:19:44.752 15:38:14 -- 
target/referrals.sh@57 -- # [[ '' == '' ]] 00:19:44.752 15:38:14 -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:19:44.752 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.752 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.752 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:19:44.752 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.752 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.752 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@65 -- # get_referral_ips rpc 00:19:44.752 15:38:14 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:19:44.752 15:38:14 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:19:44.752 15:38:14 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:19:44.752 15:38:14 -- target/referrals.sh@21 -- # sort 00:19:44.752 15:38:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:44.752 15:38:14 -- common/autotest_common.sh@10 -- # set +x 00:19:44.752 15:38:14 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:19:44.752 15:38:14 -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:19:44.752 15:38:14 -- target/referrals.sh@66 -- # get_referral_ips nvme 00:19:44.752 15:38:14 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:19:44.752 15:38:14 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:44.752 15:38:14 -- target/referrals.sh@26 -- # sort 00:19:45.010 15:38:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:19:45.010 15:38:15 -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:19:45.010 15:38:15 -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:19:45.010 15:38:15 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:19:45.010 15:38:15 -- target/referrals.sh@67 -- # jq -r .subnqn 00:19:45.010 15:38:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:45.010 15:38:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:19:45.010 15:38:15 -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:19:45.010 15:38:15 -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:19:45.010 15:38:15 -- target/referrals.sh@68 -- # jq -r .subnqn 00:19:45.010 15:38:15 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:19:45.010 15:38:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:19:45.010 15:38:15 -- target/referrals.sh@33 -- # nvme 
discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:45.010 15:38:15 -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:19:45.010 15:38:15 -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:19:45.010 15:38:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.010 15:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:45.010 15:38:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.010 15:38:15 -- target/referrals.sh@73 -- # get_referral_ips rpc 00:19:45.010 15:38:15 -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:19:45.010 15:38:15 -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:19:45.010 15:38:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.010 15:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:45.010 15:38:15 -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:19:45.010 15:38:15 -- target/referrals.sh@21 -- # sort 00:19:45.010 15:38:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.010 15:38:15 -- target/referrals.sh@21 -- # echo 127.0.0.2 00:19:45.010 15:38:15 -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:19:45.010 15:38:15 -- target/referrals.sh@74 -- # get_referral_ips nvme 00:19:45.010 15:38:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:19:45.010 15:38:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:19:45.010 15:38:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:19:45.010 15:38:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:45.010 15:38:15 -- target/referrals.sh@26 -- # sort 00:19:45.269 15:38:15 -- target/referrals.sh@26 -- # echo 127.0.0.2 00:19:45.269 15:38:15 -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:19:45.269 15:38:15 -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:19:45.269 15:38:15 -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:19:45.269 15:38:15 -- target/referrals.sh@75 -- # jq -r .subnqn 00:19:45.269 15:38:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:19:45.269 15:38:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:45.269 15:38:15 -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:19:45.269 15:38:15 -- target/referrals.sh@76 -- # jq -r .subnqn 00:19:45.269 15:38:15 -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:19:45.269 15:38:15 -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:19:45.269 15:38:15 -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:45.269 15:38:15 -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:19:45.269 15:38:15 -- 
target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:19:45.269 15:38:15 -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:19:45.269 15:38:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.269 15:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:45.269 15:38:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.269 15:38:15 -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:19:45.269 15:38:15 -- target/referrals.sh@82 -- # jq length 00:19:45.269 15:38:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:45.269 15:38:15 -- common/autotest_common.sh@10 -- # set +x 00:19:45.269 15:38:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:45.269 15:38:15 -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:19:45.269 15:38:15 -- target/referrals.sh@83 -- # get_referral_ips nvme 00:19:45.269 15:38:15 -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:19:45.269 15:38:15 -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:19:45.269 15:38:15 -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -a 10.0.0.2 -s 8009 -o json 00:19:45.270 15:38:15 -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:19:45.270 15:38:15 -- target/referrals.sh@26 -- # sort 00:19:45.529 15:38:15 -- target/referrals.sh@26 -- # echo 00:19:45.529 15:38:15 -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:19:45.529 15:38:15 -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:19:45.529 15:38:15 -- target/referrals.sh@86 -- # nvmftestfini 00:19:45.529 15:38:15 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:45.529 15:38:15 -- nvmf/common.sh@117 -- # sync 00:19:45.529 15:38:15 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:45.529 15:38:15 -- nvmf/common.sh@120 -- # set +e 00:19:45.529 15:38:15 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:45.529 15:38:15 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:45.529 rmmod nvme_tcp 00:19:45.529 rmmod nvme_fabrics 00:19:45.529 rmmod nvme_keyring 00:19:45.529 15:38:15 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:45.529 15:38:15 -- nvmf/common.sh@124 -- # set -e 00:19:45.529 15:38:15 -- nvmf/common.sh@125 -- # return 0 00:19:45.529 15:38:15 -- nvmf/common.sh@478 -- # '[' -n 66306 ']' 00:19:45.529 15:38:15 -- nvmf/common.sh@479 -- # killprocess 66306 00:19:45.529 15:38:15 -- common/autotest_common.sh@936 -- # '[' -z 66306 ']' 00:19:45.529 15:38:15 -- common/autotest_common.sh@940 -- # kill -0 66306 00:19:45.529 15:38:15 -- common/autotest_common.sh@941 -- # uname 00:19:45.529 15:38:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:45.529 15:38:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66306 00:19:45.529 15:38:15 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:45.529 15:38:15 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:45.529 killing process with pid 66306 00:19:45.529 15:38:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66306' 00:19:45.529 15:38:15 -- common/autotest_common.sh@955 -- # kill 66306 00:19:45.529 15:38:15 -- common/autotest_common.sh@960 -- # wait 66306 00:19:45.788 15:38:15 -- nvmf/common.sh@481 -- # 
'[' '' == iso ']' 00:19:45.788 15:38:15 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:45.788 15:38:15 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:45.788 15:38:15 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:45.788 15:38:15 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:45.788 15:38:15 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:45.788 15:38:15 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:45.788 15:38:15 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:45.788 15:38:16 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:45.788 ************************************ 00:19:45.788 END TEST nvmf_referrals 00:19:45.788 ************************************ 00:19:45.788 00:19:45.788 real 0m3.070s 00:19:45.788 user 0m9.588s 00:19:45.788 sys 0m0.860s 00:19:45.788 15:38:16 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:45.788 15:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:45.788 15:38:16 -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:19:45.788 15:38:16 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:45.788 15:38:16 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:45.788 15:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:46.047 ************************************ 00:19:46.047 START TEST nvmf_connect_disconnect 00:19:46.047 ************************************ 00:19:46.047 15:38:16 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:19:46.047 * Looking for test storage... 00:19:46.047 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:46.047 15:38:16 -- target/connect_disconnect.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:46.047 15:38:16 -- nvmf/common.sh@7 -- # uname -s 00:19:46.047 15:38:16 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:46.047 15:38:16 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:46.047 15:38:16 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:46.047 15:38:16 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:46.047 15:38:16 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:46.047 15:38:16 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:46.047 15:38:16 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:46.047 15:38:16 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:46.047 15:38:16 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:46.047 15:38:16 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:46.047 15:38:16 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:46.047 15:38:16 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:46.047 15:38:16 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:46.047 15:38:16 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:46.047 15:38:16 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:46.047 15:38:16 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:46.047 15:38:16 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:46.047 15:38:16 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:46.047 15:38:16 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:19:46.047 15:38:16 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:46.047 15:38:16 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.047 15:38:16 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.047 15:38:16 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.047 15:38:16 -- paths/export.sh@5 -- # export PATH 00:19:46.047 15:38:16 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:46.047 15:38:16 -- nvmf/common.sh@47 -- # : 0 00:19:46.047 15:38:16 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:46.047 15:38:16 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:46.047 15:38:16 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:46.047 15:38:16 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:46.047 15:38:16 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:46.047 15:38:16 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:46.047 15:38:16 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:46.047 15:38:16 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:46.047 15:38:16 -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:46.047 15:38:16 -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:46.047 15:38:16 -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:19:46.047 15:38:16 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:46.047 15:38:16 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.047 15:38:16 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:46.047 15:38:16 -- 
nvmf/common.sh@399 -- # local -g is_hw=no 00:19:46.047 15:38:16 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:46.047 15:38:16 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:46.047 15:38:16 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:46.047 15:38:16 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:46.047 15:38:16 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:46.047 15:38:16 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:46.047 15:38:16 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:46.047 15:38:16 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:46.047 15:38:16 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:46.047 15:38:16 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:46.047 15:38:16 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:46.047 15:38:16 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.047 15:38:16 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:46.047 15:38:16 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:46.047 15:38:16 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:46.047 15:38:16 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:46.047 15:38:16 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:46.047 15:38:16 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:46.047 15:38:16 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:46.047 15:38:16 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:46.047 15:38:16 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:46.047 15:38:16 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:46.047 15:38:16 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:46.048 15:38:16 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:46.048 Cannot find device "nvmf_tgt_br" 00:19:46.048 15:38:16 -- nvmf/common.sh@155 -- # true 00:19:46.048 15:38:16 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:46.048 Cannot find device "nvmf_tgt_br2" 00:19:46.048 15:38:16 -- nvmf/common.sh@156 -- # true 00:19:46.048 15:38:16 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:46.048 15:38:16 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:46.048 Cannot find device "nvmf_tgt_br" 00:19:46.048 15:38:16 -- nvmf/common.sh@158 -- # true 00:19:46.048 15:38:16 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:46.048 Cannot find device "nvmf_tgt_br2" 00:19:46.048 15:38:16 -- nvmf/common.sh@159 -- # true 00:19:46.048 15:38:16 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:46.306 15:38:16 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:46.306 15:38:16 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:46.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.306 15:38:16 -- nvmf/common.sh@162 -- # true 00:19:46.306 15:38:16 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:46.306 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:46.306 15:38:16 -- nvmf/common.sh@163 -- # true 00:19:46.306 15:38:16 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:46.306 15:38:16 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:46.306 15:38:16 -- nvmf/common.sh@170 -- 
# ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:46.307 15:38:16 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:46.307 15:38:16 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:46.307 15:38:16 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:46.307 15:38:16 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:46.307 15:38:16 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:46.307 15:38:16 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:46.307 15:38:16 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:46.307 15:38:16 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:46.307 15:38:16 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:46.307 15:38:16 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:46.307 15:38:16 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:46.307 15:38:16 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:46.307 15:38:16 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:19:46.307 15:38:16 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:46.307 15:38:16 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:46.307 15:38:16 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:46.307 15:38:16 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:19:46.307 15:38:16 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:19:46.307 15:38:16 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:19:46.307 15:38:16 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:19:46.307 15:38:16 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:19:46.307 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:46.307 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.072 ms 00:19:46.307 00:19:46.307 --- 10.0.0.2 ping statistics --- 00:19:46.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.307 rtt min/avg/max/mdev = 0.072/0.072/0.072/0.000 ms 00:19:46.307 15:38:16 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:19:46.307 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:19:46.307 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.034 ms 00:19:46.307 00:19:46.307 --- 10.0.0.3 ping statistics --- 00:19:46.307 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.307 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:19:46.307 15:38:16 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:19:46.565 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:46.565 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:19:46.565 00:19:46.565 --- 10.0.0.1 ping statistics --- 00:19:46.565 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.565 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:19:46.565 15:38:16 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.565 15:38:16 -- nvmf/common.sh@422 -- # return 0 00:19:46.565 15:38:16 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:19:46.565 15:38:16 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.565 15:38:16 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:19:46.565 15:38:16 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:19:46.565 15:38:16 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.565 15:38:16 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:19:46.565 15:38:16 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:19:46.565 15:38:16 -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:19:46.565 15:38:16 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:19:46.565 15:38:16 -- common/autotest_common.sh@710 -- # xtrace_disable 00:19:46.565 15:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 15:38:16 -- nvmf/common.sh@470 -- # nvmfpid=66615 00:19:46.565 15:38:16 -- nvmf/common.sh@471 -- # waitforlisten 66615 00:19:46.565 15:38:16 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:19:46.565 15:38:16 -- common/autotest_common.sh@817 -- # '[' -z 66615 ']' 00:19:46.565 15:38:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.565 15:38:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:19:46.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.565 15:38:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.565 15:38:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:19:46.565 15:38:16 -- common/autotest_common.sh@10 -- # set +x 00:19:46.565 [2024-04-26 15:38:16.680793] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:19:46.565 [2024-04-26 15:38:16.680883] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.565 [2024-04-26 15:38:16.819327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.823 [2024-04-26 15:38:16.954530] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.823 [2024-04-26 15:38:16.954821] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.823 [2024-04-26 15:38:16.954997] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.823 [2024-04-26 15:38:16.955224] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.823 [2024-04-26 15:38:16.955388] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:46.823 [2024-04-26 15:38:16.955640] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.823 [2024-04-26 15:38:16.955756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.823 [2024-04-26 15:38:16.956247] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.823 [2024-04-26 15:38:16.956258] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:19:47.759 15:38:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:19:47.759 15:38:17 -- common/autotest_common.sh@850 -- # return 0 00:19:47.759 15:38:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:19:47.759 15:38:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:19:47.759 15:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:47.759 15:38:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:19:47.759 15:38:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.759 15:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:47.759 [2024-04-26 15:38:17.754349] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.759 15:38:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:19:47.759 15:38:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.759 15:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:47.759 15:38:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:19:47.759 15:38:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.759 15:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:47.759 15:38:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:19:47.759 15:38:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.759 15:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:47.759 15:38:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.759 15:38:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:19:47.759 15:38:17 -- common/autotest_common.sh@10 -- # set +x 00:19:47.759 [2024-04-26 15:38:17.833407] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.759 15:38:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:19:47.759 15:38:17 -- target/connect_disconnect.sh@34 -- # set +x 00:19:50.344 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:52.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:54.778 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:56.718 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:59.246 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:19:59.246 15:38:29 -- 
target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:19:59.246 15:38:29 -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:19:59.246 15:38:29 -- nvmf/common.sh@477 -- # nvmfcleanup 00:19:59.246 15:38:29 -- nvmf/common.sh@117 -- # sync 00:19:59.246 15:38:29 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:59.246 15:38:29 -- nvmf/common.sh@120 -- # set +e 00:19:59.246 15:38:29 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:59.246 15:38:29 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:59.246 rmmod nvme_tcp 00:19:59.246 rmmod nvme_fabrics 00:19:59.246 rmmod nvme_keyring 00:19:59.246 15:38:29 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:59.246 15:38:29 -- nvmf/common.sh@124 -- # set -e 00:19:59.246 15:38:29 -- nvmf/common.sh@125 -- # return 0 00:19:59.246 15:38:29 -- nvmf/common.sh@478 -- # '[' -n 66615 ']' 00:19:59.246 15:38:29 -- nvmf/common.sh@479 -- # killprocess 66615 00:19:59.246 15:38:29 -- common/autotest_common.sh@936 -- # '[' -z 66615 ']' 00:19:59.246 15:38:29 -- common/autotest_common.sh@940 -- # kill -0 66615 00:19:59.246 15:38:29 -- common/autotest_common.sh@941 -- # uname 00:19:59.246 15:38:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:19:59.246 15:38:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 66615 00:19:59.246 15:38:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:19:59.246 15:38:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:19:59.246 killing process with pid 66615 00:19:59.246 15:38:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 66615' 00:19:59.246 15:38:29 -- common/autotest_common.sh@955 -- # kill 66615 00:19:59.246 15:38:29 -- common/autotest_common.sh@960 -- # wait 66615 00:19:59.246 15:38:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:19:59.246 15:38:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:19:59.246 15:38:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:19:59.246 15:38:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:59.246 15:38:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:59.246 15:38:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.246 15:38:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.246 15:38:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.246 15:38:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:19:59.246 00:19:59.246 real 0m13.372s 00:19:59.246 user 0m48.906s 00:19:59.246 sys 0m1.785s 00:19:59.246 15:38:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:19:59.246 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.246 ************************************ 00:19:59.246 END TEST nvmf_connect_disconnect 00:19:59.246 ************************************ 00:19:59.505 15:38:29 -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:59.505 15:38:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:19:59.505 15:38:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:19:59.505 15:38:29 -- common/autotest_common.sh@10 -- # set +x 00:19:59.505 ************************************ 00:19:59.505 START TEST nvmf_multitarget 00:19:59.505 ************************************ 00:19:59.505 15:38:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:19:59.505 * Looking for test storage... 
00:19:59.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:19:59.505 15:38:29 -- target/multitarget.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:19:59.505 15:38:29 -- nvmf/common.sh@7 -- # uname -s 00:19:59.505 15:38:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.505 15:38:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.505 15:38:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.505 15:38:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.505 15:38:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:59.505 15:38:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:59.505 15:38:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.505 15:38:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:59.505 15:38:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.505 15:38:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:59.505 15:38:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:59.505 15:38:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:19:59.505 15:38:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.505 15:38:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:59.505 15:38:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:19:59.505 15:38:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.505 15:38:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:59.505 15:38:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.505 15:38:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.505 15:38:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.505 15:38:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.505 15:38:29 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.505 15:38:29 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.505 15:38:29 -- paths/export.sh@5 -- # export PATH 00:19:59.505 15:38:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.505 15:38:29 -- nvmf/common.sh@47 -- # : 0 00:19:59.505 15:38:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:59.505 15:38:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:59.505 15:38:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:59.505 15:38:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.505 15:38:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.505 15:38:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:59.505 15:38:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:59.505 15:38:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:59.505 15:38:29 -- target/multitarget.sh@13 -- # rpc_py=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:19:59.505 15:38:29 -- target/multitarget.sh@15 -- # nvmftestinit 00:19:59.505 15:38:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:19:59.505 15:38:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.505 15:38:29 -- nvmf/common.sh@437 -- # prepare_net_devs 00:19:59.505 15:38:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:19:59.505 15:38:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:19:59.505 15:38:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:59.505 15:38:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:59.506 15:38:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:59.506 15:38:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:19:59.506 15:38:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:19:59.506 15:38:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:19:59.506 15:38:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:19:59.506 15:38:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:19:59.506 15:38:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:19:59.506 15:38:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:59.506 15:38:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:59.506 15:38:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:19:59.506 15:38:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:19:59.506 15:38:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:19:59.506 15:38:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:19:59.506 15:38:29 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:19:59.506 15:38:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:59.506 15:38:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:19:59.506 15:38:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:19:59.506 15:38:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:19:59.506 15:38:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:19:59.506 15:38:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:19:59.506 15:38:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:19:59.506 Cannot find device "nvmf_tgt_br" 00:19:59.506 15:38:29 -- nvmf/common.sh@155 -- # true 00:19:59.506 15:38:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:19:59.506 Cannot find device "nvmf_tgt_br2" 00:19:59.506 15:38:29 -- nvmf/common.sh@156 -- # true 00:19:59.506 15:38:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:19:59.506 15:38:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:19:59.506 Cannot find device "nvmf_tgt_br" 00:19:59.763 15:38:29 -- nvmf/common.sh@158 -- # true 00:19:59.763 15:38:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:19:59.763 Cannot find device "nvmf_tgt_br2" 00:19:59.763 15:38:29 -- nvmf/common.sh@159 -- # true 00:19:59.763 15:38:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:19:59.763 15:38:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:19:59.763 15:38:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:19:59.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.763 15:38:29 -- nvmf/common.sh@162 -- # true 00:19:59.763 15:38:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:19:59.763 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:19:59.763 15:38:29 -- nvmf/common.sh@163 -- # true 00:19:59.764 15:38:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:19:59.764 15:38:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:19:59.764 15:38:29 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:19:59.764 15:38:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:19:59.764 15:38:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:19:59.764 15:38:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:19:59.764 15:38:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:19:59.764 15:38:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:19:59.764 15:38:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:19:59.764 15:38:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:19:59.764 15:38:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:19:59.764 15:38:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:19:59.764 15:38:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:19:59.764 15:38:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:19:59.764 15:38:30 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:19:59.764 15:38:30 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:19:59.764 15:38:30 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:19:59.764 15:38:30 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:19:59.764 15:38:30 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:19:59.764 15:38:30 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:00.023 15:38:30 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:00.023 15:38:30 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:00.023 15:38:30 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:00.023 15:38:30 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:00.023 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:00.023 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:00.023 00:20:00.023 --- 10.0.0.2 ping statistics --- 00:20:00.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.023 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:00.023 15:38:30 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:00.023 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:00.023 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:20:00.023 00:20:00.023 --- 10.0.0.3 ping statistics --- 00:20:00.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.023 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:20:00.023 15:38:30 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:00.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:00.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:20:00.023 00:20:00.023 --- 10.0.0.1 ping statistics --- 00:20:00.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:00.023 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:00.023 15:38:30 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:00.023 15:38:30 -- nvmf/common.sh@422 -- # return 0 00:20:00.023 15:38:30 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:00.023 15:38:30 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:00.023 15:38:30 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:00.023 15:38:30 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:00.023 15:38:30 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:00.023 15:38:30 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:00.023 15:38:30 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:00.023 15:38:30 -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:20:00.023 15:38:30 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:00.023 15:38:30 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:00.023 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.023 15:38:30 -- nvmf/common.sh@470 -- # nvmfpid=67019 00:20:00.023 15:38:30 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:00.023 15:38:30 -- nvmf/common.sh@471 -- # waitforlisten 67019 00:20:00.023 15:38:30 -- common/autotest_common.sh@817 -- # '[' -z 67019 ']' 00:20:00.023 15:38:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:00.023 15:38:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:00.023 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
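For reference, the nvmf_veth_init sequence traced above reduces to the following topology setup, condensed from the trace (run as root; the cleanup of stale devices that precedes it is omitted): one initiator-side veth stays in the root namespace, two target-side veths are moved into a private namespace, and a bridge ties the host-side peer ends together.

    # namespace that will host the SPDK target
    ip netns add nvmf_tgt_ns_spdk
    # three veth pairs: initiator, first target port, second target port
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    # addressing: 10.0.0.1 = initiator, 10.0.0.2 / 10.0.0.3 = target listeners
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    # bring everything up
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    # bridge the host-side peers together
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    # open TCP/4420 toward the initiator veth and allow bridged forwarding
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    # sanity pings in both directions, as in the trace
    ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
    ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1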
00:20:00.023 15:38:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:00.023 15:38:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:00.023 15:38:30 -- common/autotest_common.sh@10 -- # set +x 00:20:00.023 [2024-04-26 15:38:30.201132] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:20:00.023 [2024-04-26 15:38:30.201247] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:00.282 [2024-04-26 15:38:30.347517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:00.282 [2024-04-26 15:38:30.479942] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:00.282 [2024-04-26 15:38:30.480002] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:00.282 [2024-04-26 15:38:30.480016] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:00.282 [2024-04-26 15:38:30.480026] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:00.282 [2024-04-26 15:38:30.480036] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:00.282 [2024-04-26 15:38:30.480249] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:00.282 [2024-04-26 15:38:30.480743] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:00.282 [2024-04-26 15:38:30.480830] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:00.282 [2024-04-26 15:38:30.480842] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:01.217 15:38:31 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:01.217 15:38:31 -- common/autotest_common.sh@850 -- # return 0 00:20:01.217 15:38:31 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:01.217 15:38:31 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:01.217 15:38:31 -- common/autotest_common.sh@10 -- # set +x 00:20:01.217 15:38:31 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:01.218 15:38:31 -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:01.218 15:38:31 -- target/multitarget.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:01.218 15:38:31 -- target/multitarget.sh@21 -- # jq length 00:20:01.218 15:38:31 -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:20:01.218 15:38:31 -- target/multitarget.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:20:01.218 "nvmf_tgt_1" 00:20:01.218 15:38:31 -- target/multitarget.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:20:01.475 "nvmf_tgt_2" 00:20:01.475 15:38:31 -- target/multitarget.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:01.475 15:38:31 -- target/multitarget.sh@28 -- # jq length 00:20:01.475 15:38:31 -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:20:01.475 15:38:31 -- target/multitarget.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 
nvmf_delete_target -n nvmf_tgt_1 00:20:01.733 true 00:20:01.733 15:38:31 -- target/multitarget.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:20:01.733 true 00:20:01.733 15:38:32 -- target/multitarget.sh@35 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:20:01.733 15:38:32 -- target/multitarget.sh@35 -- # jq length 00:20:01.991 15:38:32 -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:20:01.991 15:38:32 -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:20:01.991 15:38:32 -- target/multitarget.sh@41 -- # nvmftestfini 00:20:01.991 15:38:32 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:01.991 15:38:32 -- nvmf/common.sh@117 -- # sync 00:20:01.991 15:38:32 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:01.991 15:38:32 -- nvmf/common.sh@120 -- # set +e 00:20:01.991 15:38:32 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:01.991 15:38:32 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:01.991 rmmod nvme_tcp 00:20:01.991 rmmod nvme_fabrics 00:20:01.991 rmmod nvme_keyring 00:20:01.991 15:38:32 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:01.991 15:38:32 -- nvmf/common.sh@124 -- # set -e 00:20:01.991 15:38:32 -- nvmf/common.sh@125 -- # return 0 00:20:01.991 15:38:32 -- nvmf/common.sh@478 -- # '[' -n 67019 ']' 00:20:01.991 15:38:32 -- nvmf/common.sh@479 -- # killprocess 67019 00:20:02.250 15:38:32 -- common/autotest_common.sh@936 -- # '[' -z 67019 ']' 00:20:02.250 15:38:32 -- common/autotest_common.sh@940 -- # kill -0 67019 00:20:02.250 15:38:32 -- common/autotest_common.sh@941 -- # uname 00:20:02.250 15:38:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:02.250 15:38:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67019 00:20:02.250 15:38:32 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:02.250 15:38:32 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:02.250 15:38:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67019' 00:20:02.250 killing process with pid 67019 00:20:02.250 15:38:32 -- common/autotest_common.sh@955 -- # kill 67019 00:20:02.250 15:38:32 -- common/autotest_common.sh@960 -- # wait 67019 00:20:02.518 15:38:32 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:02.518 15:38:32 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:02.518 15:38:32 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:02.518 15:38:32 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:02.518 15:38:32 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:02.518 15:38:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.518 15:38:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.518 15:38:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.518 15:38:32 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:02.518 ************************************ 00:20:02.518 END TEST nvmf_multitarget 00:20:02.518 ************************************ 00:20:02.518 00:20:02.518 real 0m2.991s 00:20:02.518 user 0m9.441s 00:20:02.518 sys 0m0.758s 00:20:02.518 15:38:32 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:02.518 15:38:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.518 15:38:32 -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:02.518 15:38:32 -- common/autotest_common.sh@1087 -- # '[' 3 
-le 1 ']' 00:20:02.518 15:38:32 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:02.518 15:38:32 -- common/autotest_common.sh@10 -- # set +x 00:20:02.518 ************************************ 00:20:02.518 START TEST nvmf_rpc 00:20:02.518 ************************************ 00:20:02.518 15:38:32 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:20:02.518 * Looking for test storage... 00:20:02.777 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:02.777 15:38:32 -- target/rpc.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:02.777 15:38:32 -- nvmf/common.sh@7 -- # uname -s 00:20:02.777 15:38:32 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:02.777 15:38:32 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:02.777 15:38:32 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:02.777 15:38:32 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:02.777 15:38:32 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:02.777 15:38:32 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:02.777 15:38:32 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:02.777 15:38:32 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:02.777 15:38:32 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:02.777 15:38:32 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:02.777 15:38:32 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:02.777 15:38:32 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:02.777 15:38:32 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:02.777 15:38:32 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:02.777 15:38:32 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:02.777 15:38:32 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:02.777 15:38:32 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:02.777 15:38:32 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:02.777 15:38:32 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:02.777 15:38:32 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:02.777 15:38:32 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 15:38:32 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 15:38:32 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 15:38:32 -- paths/export.sh@5 -- # export PATH 00:20:02.778 15:38:32 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:02.778 15:38:32 -- nvmf/common.sh@47 -- # : 0 00:20:02.778 15:38:32 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:02.778 15:38:32 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:02.778 15:38:32 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:02.778 15:38:32 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:02.778 15:38:32 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:02.778 15:38:32 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:02.778 15:38:32 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:02.778 15:38:32 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:02.778 15:38:32 -- target/rpc.sh@11 -- # loops=5 00:20:02.778 15:38:32 -- target/rpc.sh@23 -- # nvmftestinit 00:20:02.778 15:38:32 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:02.778 15:38:32 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:02.778 15:38:32 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:02.778 15:38:32 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:02.778 15:38:32 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:02.778 15:38:32 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:02.778 15:38:32 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:02.778 15:38:32 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:02.778 15:38:32 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:02.778 15:38:32 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:02.778 15:38:32 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:02.778 15:38:32 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:02.778 15:38:32 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:02.778 15:38:32 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:02.778 15:38:32 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:02.778 15:38:32 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:02.778 15:38:32 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:02.778 15:38:32 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:02.778 15:38:32 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:02.778 15:38:32 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:02.778 15:38:32 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:02.778 15:38:32 -- nvmf/common.sh@148 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:02.778 15:38:32 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:02.778 15:38:32 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:02.778 15:38:32 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:02.778 15:38:32 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:02.778 15:38:32 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:02.778 15:38:32 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:02.778 Cannot find device "nvmf_tgt_br" 00:20:02.778 15:38:32 -- nvmf/common.sh@155 -- # true 00:20:02.778 15:38:32 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:02.778 Cannot find device "nvmf_tgt_br2" 00:20:02.778 15:38:32 -- nvmf/common.sh@156 -- # true 00:20:02.778 15:38:32 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:02.778 15:38:32 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:02.778 Cannot find device "nvmf_tgt_br" 00:20:02.778 15:38:32 -- nvmf/common.sh@158 -- # true 00:20:02.778 15:38:32 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:02.778 Cannot find device "nvmf_tgt_br2" 00:20:02.778 15:38:32 -- nvmf/common.sh@159 -- # true 00:20:02.778 15:38:32 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:02.778 15:38:32 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:02.778 15:38:32 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:02.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.778 15:38:32 -- nvmf/common.sh@162 -- # true 00:20:02.778 15:38:32 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:02.778 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:02.778 15:38:32 -- nvmf/common.sh@163 -- # true 00:20:02.778 15:38:32 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:02.778 15:38:32 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:02.778 15:38:32 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:02.778 15:38:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:02.778 15:38:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:02.778 15:38:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:02.778 15:38:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:02.778 15:38:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:02.778 15:38:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:03.035 15:38:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:03.035 15:38:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:03.035 15:38:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:03.035 15:38:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:03.035 15:38:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:03.035 15:38:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:03.035 15:38:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:03.035 15:38:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type 
bridge 00:20:03.035 15:38:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:03.035 15:38:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:03.035 15:38:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:03.035 15:38:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:03.035 15:38:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:03.035 15:38:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:03.035 15:38:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:03.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:03.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:20:03.035 00:20:03.035 --- 10.0.0.2 ping statistics --- 00:20:03.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.035 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:20:03.035 15:38:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:03.035 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:03.035 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:20:03.035 00:20:03.035 --- 10.0.0.3 ping statistics --- 00:20:03.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.036 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:20:03.036 15:38:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:03.036 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:03.036 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:03.036 00:20:03.036 --- 10.0.0.1 ping statistics --- 00:20:03.036 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:03.036 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:03.036 15:38:33 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:03.036 15:38:33 -- nvmf/common.sh@422 -- # return 0 00:20:03.036 15:38:33 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:03.036 15:38:33 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:03.036 15:38:33 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:03.036 15:38:33 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:03.036 15:38:33 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:03.036 15:38:33 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:03.036 15:38:33 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:03.036 15:38:33 -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:20:03.036 15:38:33 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:03.036 15:38:33 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:03.036 15:38:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.036 15:38:33 -- nvmf/common.sh@470 -- # nvmfpid=67256 00:20:03.036 15:38:33 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:03.036 15:38:33 -- nvmf/common.sh@471 -- # waitforlisten 67256 00:20:03.036 15:38:33 -- common/autotest_common.sh@817 -- # '[' -z 67256 ']' 00:20:03.036 15:38:33 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:03.036 15:38:33 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:03.036 15:38:33 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:03.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
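The nvmfappstart step traced next boots the SPDK target inside that namespace and blocks until its JSON-RPC socket is available. A minimal sketch of the pattern, with the binary path and flags taken from the trace (the real waitforlisten helper in autotest_common.sh is more thorough about retries and error reporting):

    # shm id 0, all tracepoint groups, 4-core mask, run inside the test namespace
    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # wait for the app to start up and listen on /var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        [ -S /var/tmp/spdk.sock ] && break
        sleep 0.5
    done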
00:20:03.036 15:38:33 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:03.036 15:38:33 -- common/autotest_common.sh@10 -- # set +x 00:20:03.036 [2024-04-26 15:38:33.266685] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:20:03.036 [2024-04-26 15:38:33.266799] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:03.293 [2024-04-26 15:38:33.409134] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:03.293 [2024-04-26 15:38:33.539442] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:03.293 [2024-04-26 15:38:33.539723] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:03.293 [2024-04-26 15:38:33.539822] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:03.293 [2024-04-26 15:38:33.539928] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:03.293 [2024-04-26 15:38:33.540005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:03.293 [2024-04-26 15:38:33.540256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:03.293 [2024-04-26 15:38:33.540331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:03.293 [2024-04-26 15:38:33.540425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:03.293 [2024-04-26 15:38:33.540425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:04.226 15:38:34 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:04.226 15:38:34 -- common/autotest_common.sh@850 -- # return 0 00:20:04.226 15:38:34 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:04.226 15:38:34 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:04.226 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.226 15:38:34 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:04.226 15:38:34 -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:20:04.226 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.226 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.226 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.226 15:38:34 -- target/rpc.sh@26 -- # stats='{ 00:20:04.226 "poll_groups": [ 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_0", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [] 00:20:04.226 }, 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_1", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [] 00:20:04.226 }, 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_2", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [] 00:20:04.226 }, 00:20:04.226 { 
00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_3", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [] 00:20:04.226 } 00:20:04.226 ], 00:20:04.226 "tick_rate": 2200000000 00:20:04.226 }' 00:20:04.226 15:38:34 -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:20:04.226 15:38:34 -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:20:04.226 15:38:34 -- target/rpc.sh@15 -- # wc -l 00:20:04.226 15:38:34 -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:20:04.226 15:38:34 -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:20:04.226 15:38:34 -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:20:04.226 15:38:34 -- target/rpc.sh@29 -- # [[ null == null ]] 00:20:04.226 15:38:34 -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:04.226 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.226 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.226 [2024-04-26 15:38:34.403541] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:04.226 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.226 15:38:34 -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:20:04.226 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.226 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.226 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.226 15:38:34 -- target/rpc.sh@33 -- # stats='{ 00:20:04.226 "poll_groups": [ 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_0", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [ 00:20:04.226 { 00:20:04.226 "trtype": "TCP" 00:20:04.226 } 00:20:04.226 ] 00:20:04.226 }, 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_1", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [ 00:20:04.226 { 00:20:04.226 "trtype": "TCP" 00:20:04.226 } 00:20:04.226 ] 00:20:04.226 }, 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_2", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [ 00:20:04.226 { 00:20:04.226 "trtype": "TCP" 00:20:04.226 } 00:20:04.226 ] 00:20:04.226 }, 00:20:04.226 { 00:20:04.226 "admin_qpairs": 0, 00:20:04.226 "completed_nvme_io": 0, 00:20:04.226 "current_admin_qpairs": 0, 00:20:04.226 "current_io_qpairs": 0, 00:20:04.226 "io_qpairs": 0, 00:20:04.226 "name": "nvmf_tgt_poll_group_3", 00:20:04.226 "pending_bdev_io": 0, 00:20:04.226 "transports": [ 00:20:04.226 { 00:20:04.227 "trtype": "TCP" 00:20:04.227 } 00:20:04.227 ] 00:20:04.227 } 00:20:04.227 ], 00:20:04.227 "tick_rate": 2200000000 00:20:04.227 }' 00:20:04.227 15:38:34 -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:20:04.227 15:38:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:04.227 15:38:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:20:04.227 15:38:34 -- 
target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:04.227 15:38:34 -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:20:04.227 15:38:34 -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:20:04.227 15:38:34 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:04.227 15:38:34 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:04.227 15:38:34 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:04.484 15:38:34 -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:20:04.484 15:38:34 -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:20:04.484 15:38:34 -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:20:04.484 15:38:34 -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:20:04.484 15:38:34 -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:04.484 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.484 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 Malloc1 00:20:04.484 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.484 15:38:34 -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:20:04.484 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.484 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.484 15:38:34 -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:04.484 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.484 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.484 15:38:34 -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:20:04.484 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.484 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.484 15:38:34 -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:04.484 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.484 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.484 [2024-04-26 15:38:34.607236] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:04.484 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.484 15:38:34 -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -a 10.0.0.2 -s 4420 00:20:04.485 15:38:34 -- common/autotest_common.sh@638 -- # local es=0 00:20:04.485 15:38:34 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -a 10.0.0.2 -s 4420 00:20:04.485 15:38:34 -- common/autotest_common.sh@626 -- # local arg=nvme 00:20:04.485 15:38:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.485 15:38:34 -- common/autotest_common.sh@630 -- # type -t nvme 00:20:04.485 15:38:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" 
in 00:20:04.485 15:38:34 -- common/autotest_common.sh@632 -- # type -P nvme 00:20:04.485 15:38:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:04.485 15:38:34 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:20:04.485 15:38:34 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:20:04.485 15:38:34 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -a 10.0.0.2 -s 4420 00:20:04.485 [2024-04-26 15:38:34.635582] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9' 00:20:04.485 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:04.485 could not add new controller: failed to write to nvme-fabrics device 00:20:04.485 15:38:34 -- common/autotest_common.sh@641 -- # es=1 00:20:04.485 15:38:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:04.485 15:38:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:04.485 15:38:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:04.485 15:38:34 -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:04.485 15:38:34 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:04.485 15:38:34 -- common/autotest_common.sh@10 -- # set +x 00:20:04.485 15:38:34 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:04.485 15:38:34 -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:04.742 15:38:34 -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:20:04.742 15:38:34 -- common/autotest_common.sh@1184 -- # local i=0 00:20:04.742 15:38:34 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:04.742 15:38:34 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:04.742 15:38:34 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:06.640 15:38:36 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:06.640 15:38:36 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:06.640 15:38:36 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:06.640 15:38:36 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:06.640 15:38:36 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:06.640 15:38:36 -- common/autotest_common.sh@1194 -- # return 0 00:20:06.640 15:38:36 -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:06.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:06.640 15:38:36 -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:06.640 15:38:36 -- common/autotest_common.sh@1205 -- # local i=0 00:20:06.640 15:38:36 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:06.640 15:38:36 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.640 15:38:36 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:06.640 15:38:36 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:06.640 15:38:36 -- 
common/autotest_common.sh@1217 -- # return 0 00:20:06.640 15:38:36 -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:06.640 15:38:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.641 15:38:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.641 15:38:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.641 15:38:36 -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:06.641 15:38:36 -- common/autotest_common.sh@638 -- # local es=0 00:20:06.641 15:38:36 -- common/autotest_common.sh@640 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:06.641 15:38:36 -- common/autotest_common.sh@626 -- # local arg=nvme 00:20:06.641 15:38:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.641 15:38:36 -- common/autotest_common.sh@630 -- # type -t nvme 00:20:06.641 15:38:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.641 15:38:36 -- common/autotest_common.sh@632 -- # type -P nvme 00:20:06.641 15:38:36 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:20:06.641 15:38:36 -- common/autotest_common.sh@632 -- # arg=/usr/sbin/nvme 00:20:06.641 15:38:36 -- common/autotest_common.sh@632 -- # [[ -x /usr/sbin/nvme ]] 00:20:06.641 15:38:36 -- common/autotest_common.sh@641 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:06.899 [2024-04-26 15:38:36.936550] ctrlr.c: 780:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9' 00:20:06.899 Failed to write to /dev/nvme-fabrics: Input/output error 00:20:06.899 could not add new controller: failed to write to nvme-fabrics device 00:20:06.899 15:38:36 -- common/autotest_common.sh@641 -- # es=1 00:20:06.899 15:38:36 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:20:06.899 15:38:36 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:20:06.899 15:38:36 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:20:06.899 15:38:36 -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:20:06.899 15:38:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:06.899 15:38:36 -- common/autotest_common.sh@10 -- # set +x 00:20:06.899 15:38:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:06.899 15:38:36 -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:06.899 15:38:37 -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:20:06.899 15:38:37 -- common/autotest_common.sh@1184 -- # local i=0 00:20:06.899 15:38:37 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:06.899 15:38:37 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:06.899 15:38:37 -- common/autotest_common.sh@1191 -- # sleep 
2 00:20:09.423 15:38:39 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:09.423 15:38:39 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:09.423 15:38:39 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:09.423 15:38:39 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:09.423 15:38:39 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:09.423 15:38:39 -- common/autotest_common.sh@1194 -- # return 0 00:20:09.423 15:38:39 -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:09.423 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:09.423 15:38:39 -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:09.423 15:38:39 -- common/autotest_common.sh@1205 -- # local i=0 00:20:09.423 15:38:39 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:09.423 15:38:39 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:09.423 15:38:39 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:09.423 15:38:39 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:09.423 15:38:39 -- common/autotest_common.sh@1217 -- # return 0 00:20:09.423 15:38:39 -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.423 15:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.423 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.423 15:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.423 15:38:39 -- target/rpc.sh@81 -- # seq 1 5 00:20:09.424 15:38:39 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:09.424 15:38:39 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:09.424 15:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.424 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.424 15:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.424 15:38:39 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:09.424 15:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.424 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.424 [2024-04-26 15:38:39.237338] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:09.424 15:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.424 15:38:39 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:09.424 15:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.424 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.424 15:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.424 15:38:39 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:09.424 15:38:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:09.424 15:38:39 -- common/autotest_common.sh@10 -- # set +x 00:20:09.424 15:38:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:09.424 15:38:39 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:09.424 15:38:39 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:09.424 15:38:39 -- 
common/autotest_common.sh@1184 -- # local i=0 00:20:09.424 15:38:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:09.424 15:38:39 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:09.424 15:38:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:11.320 15:38:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:11.320 15:38:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:11.320 15:38:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:11.320 15:38:41 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:11.320 15:38:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:11.320 15:38:41 -- common/autotest_common.sh@1194 -- # return 0 00:20:11.320 15:38:41 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:11.320 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:11.320 15:38:41 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:11.320 15:38:41 -- common/autotest_common.sh@1205 -- # local i=0 00:20:11.320 15:38:41 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:11.320 15:38:41 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.320 15:38:41 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:11.320 15:38:41 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:11.320 15:38:41 -- common/autotest_common.sh@1217 -- # return 0 00:20:11.320 15:38:41 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:11.320 15:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.320 15:38:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.320 15:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.320 15:38:41 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:11.320 15:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.320 15:38:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.320 15:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.320 15:38:41 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:11.320 15:38:41 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:11.320 15:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.320 15:38:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.320 15:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.320 15:38:41 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:11.320 15:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.320 15:38:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.320 [2024-04-26 15:38:41.556380] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:11.320 15:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.320 15:38:41 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:11.320 15:38:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.320 15:38:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.320 15:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.320 15:38:41 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:11.320 15:38:41 
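The connect failures recorded a little earlier ("Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host ...", "could not add new controller") are the expected outcome of the access-control checks in rpc.sh: with allow_any_host disabled, a connect only succeeds once the host NQN has been added to the subsystem. A condensed sketch of that exchange, using the NQNs from this run (rpc_cmd in the trace is effectively a wrapper around scripts/rpc.py, which stands in for it here):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1
    HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9

    $RPC nvmf_subsystem_allow_any_host -d "$NQN"       # lock the subsystem down
    if nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" --hostnqn="$HOSTNQN"; then
        echo "connect unexpectedly succeeded" >&2      # the test expects this branch NOT to run
    fi
    $RPC nvmf_subsystem_add_host "$NQN" "$HOSTNQN"     # whitelist this host NQN
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" --hostnqn="$HOSTNQN"   # now succeeds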
-- common/autotest_common.sh@549 -- # xtrace_disable 00:20:11.320 15:38:41 -- common/autotest_common.sh@10 -- # set +x 00:20:11.320 15:38:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:11.320 15:38:41 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:11.578 15:38:41 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:11.578 15:38:41 -- common/autotest_common.sh@1184 -- # local i=0 00:20:11.578 15:38:41 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:11.578 15:38:41 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:11.578 15:38:41 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:13.484 15:38:43 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:13.484 15:38:43 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:13.484 15:38:43 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:13.484 15:38:43 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:13.484 15:38:43 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:13.484 15:38:43 -- common/autotest_common.sh@1194 -- # return 0 00:20:13.484 15:38:43 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:13.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:13.740 15:38:43 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:13.740 15:38:43 -- common/autotest_common.sh@1205 -- # local i=0 00:20:13.740 15:38:43 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:13.740 15:38:43 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:13.740 15:38:43 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:13.740 15:38:43 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:13.740 15:38:43 -- common/autotest_common.sh@1217 -- # return 0 00:20:13.740 15:38:43 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:13.740 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.740 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.740 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.740 15:38:43 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:13.740 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.740 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.740 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.740 15:38:43 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:13.740 15:38:43 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:13.740 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.740 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.740 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.740 15:38:43 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:13.740 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.740 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.740 [2024-04-26 15:38:43.863621] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 
*** 00:20:13.740 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.740 15:38:43 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:13.740 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.740 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.740 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.740 15:38:43 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:13.740 15:38:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:13.740 15:38:43 -- common/autotest_common.sh@10 -- # set +x 00:20:13.740 15:38:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:13.740 15:38:43 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:13.998 15:38:44 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:13.998 15:38:44 -- common/autotest_common.sh@1184 -- # local i=0 00:20:13.998 15:38:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:13.998 15:38:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:13.998 15:38:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:15.896 15:38:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:15.896 15:38:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:15.896 15:38:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:15.896 15:38:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:15.896 15:38:46 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:15.896 15:38:46 -- common/autotest_common.sh@1194 -- # return 0 00:20:15.896 15:38:46 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:15.896 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:15.896 15:38:46 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:15.896 15:38:46 -- common/autotest_common.sh@1205 -- # local i=0 00:20:15.896 15:38:46 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:15.896 15:38:46 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.896 15:38:46 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:15.896 15:38:46 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:15.896 15:38:46 -- common/autotest_common.sh@1217 -- # return 0 00:20:15.896 15:38:46 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:15.896 15:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.896 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:20:15.896 15:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.896 15:38:46 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.896 15:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.896 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:20:15.896 15:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.896 15:38:46 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:15.896 15:38:46 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:15.896 15:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 
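The block repeating around this point is the body of the rpc.sh loop (loops=5): each pass wraps the Malloc1 bdev in a fresh subsystem, exposes it over TCP, connects from the initiator, waits for the serial to show up, then tears everything down again. One iteration, condensed from the trace (same rpc.py stand-in as above; Malloc1 was created earlier with bdev_malloc_create 64 512 -b Malloc1):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode1

    $RPC nvmf_create_subsystem "$NQN" -s SPDKISFASTANDAWESOME            # serial checked by waitforserial
    $RPC nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_ns "$NQN" Malloc1 -n 5                       # namespace id 5
    $RPC nvmf_subsystem_allow_any_host "$NQN"
    nvme connect -t tcp -a 10.0.0.2 -s 4420 -n "$NQN" \
        --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9
    lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME               # waitforserial: block device present
    nvme disconnect -n "$NQN"
    $RPC nvmf_subsystem_remove_ns "$NQN" 5
    $RPC nvmf_delete_subsystem "$NQN"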
00:20:15.896 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:20:15.896 15:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.896 15:38:46 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:15.896 15:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.896 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:20:15.896 [2024-04-26 15:38:46.174891] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:15.896 15:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.896 15:38:46 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:15.896 15:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.896 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:20:15.896 15:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:15.896 15:38:46 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:15.896 15:38:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:15.896 15:38:46 -- common/autotest_common.sh@10 -- # set +x 00:20:16.206 15:38:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:16.206 15:38:46 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:16.206 15:38:46 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:16.206 15:38:46 -- common/autotest_common.sh@1184 -- # local i=0 00:20:16.206 15:38:46 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:16.206 15:38:46 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:16.206 15:38:46 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:18.121 15:38:48 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:18.121 15:38:48 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:18.121 15:38:48 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:18.121 15:38:48 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:18.121 15:38:48 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:18.121 15:38:48 -- common/autotest_common.sh@1194 -- # return 0 00:20:18.121 15:38:48 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:18.380 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:18.380 15:38:48 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:18.380 15:38:48 -- common/autotest_common.sh@1205 -- # local i=0 00:20:18.380 15:38:48 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:18.380 15:38:48 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:18.380 15:38:48 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:18.380 15:38:48 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:18.380 15:38:48 -- common/autotest_common.sh@1217 -- # return 0 00:20:18.380 15:38:48 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:18.380 15:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.380 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 15:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.380 15:38:48 -- target/rpc.sh@94 -- # 
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:18.380 15:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.380 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 15:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.380 15:38:48 -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:20:18.380 15:38:48 -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:18.380 15:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.380 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 15:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.380 15:38:48 -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:18.380 15:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.380 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 [2024-04-26 15:38:48.494658] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:18.380 15:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.380 15:38:48 -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:20:18.380 15:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.380 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 15:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.380 15:38:48 -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:18.380 15:38:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:18.380 15:38:48 -- common/autotest_common.sh@10 -- # set +x 00:20:18.380 15:38:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:18.380 15:38:48 -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:20:18.638 15:38:48 -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:20:18.638 15:38:48 -- common/autotest_common.sh@1184 -- # local i=0 00:20:18.638 15:38:48 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:20:18.638 15:38:48 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:20:18.638 15:38:48 -- common/autotest_common.sh@1191 -- # sleep 2 00:20:20.544 15:38:50 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:20:20.544 15:38:50 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:20:20.544 15:38:50 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:20:20.544 15:38:50 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:20:20.544 15:38:50 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:20:20.544 15:38:50 -- common/autotest_common.sh@1194 -- # return 0 00:20:20.544 15:38:50 -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:20:20.803 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:20:20.803 15:38:50 -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:20:20.803 15:38:50 -- common/autotest_common.sh@1205 -- # local i=0 00:20:20.803 15:38:50 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:20:20.803 15:38:50 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:20:20.803 15:38:50 -- common/autotest_common.sh@1213 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:20:20.803 15:38:50 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:20:20.803 15:38:50 -- common/autotest_common.sh@1217 -- # return 0 00:20:20.803 15:38:50 -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@99 -- # seq 1 5 00:20:20.803 15:38:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:20.803 15:38:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 [2024-04-26 15:38:50.902008] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:20.803 15:38:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@101 -- # rpc_cmd 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 [2024-04-26 15:38:50.950012] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:20.803 15:38:50 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:50 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.803 15:38:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:50 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 [2024-04-26 15:38:50.998063] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 
15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:20.803 15:38:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 [2024-04-26 15:38:51.054131] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.803 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.803 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.803 15:38:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:20.803 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.804 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.804 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.804 15:38:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:20.804 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.804 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.804 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.804 15:38:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:20.804 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.804 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:20.804 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:20.804 15:38:51 -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:20:20.804 15:38:51 -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:20:20.804 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:20.804 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:21.063 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.063 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 [2024-04-26 15:38:51.106196] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@102 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:21.063 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.063 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:20:21.063 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.063 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:21.063 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.063 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:21.063 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.063 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:20:21.063 15:38:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:21.063 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.063 15:38:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:21.063 15:38:51 -- target/rpc.sh@110 -- # stats='{ 00:20:21.063 "poll_groups": [ 00:20:21.063 { 00:20:21.063 "admin_qpairs": 2, 00:20:21.063 "completed_nvme_io": 164, 00:20:21.063 "current_admin_qpairs": 0, 00:20:21.063 "current_io_qpairs": 0, 00:20:21.063 "io_qpairs": 16, 00:20:21.063 "name": "nvmf_tgt_poll_group_0", 00:20:21.063 "pending_bdev_io": 0, 00:20:21.063 "transports": [ 00:20:21.063 { 00:20:21.063 "trtype": "TCP" 00:20:21.063 } 00:20:21.063 ] 00:20:21.063 }, 00:20:21.063 { 00:20:21.063 "admin_qpairs": 3, 00:20:21.063 "completed_nvme_io": 68, 00:20:21.063 "current_admin_qpairs": 0, 00:20:21.063 "current_io_qpairs": 0, 00:20:21.063 "io_qpairs": 17, 00:20:21.063 "name": "nvmf_tgt_poll_group_1", 00:20:21.063 "pending_bdev_io": 0, 00:20:21.063 "transports": [ 00:20:21.063 { 00:20:21.063 "trtype": "TCP" 00:20:21.063 } 00:20:21.063 ] 00:20:21.063 }, 00:20:21.063 { 00:20:21.063 "admin_qpairs": 1, 00:20:21.063 "completed_nvme_io": 71, 00:20:21.063 "current_admin_qpairs": 0, 00:20:21.063 "current_io_qpairs": 0, 00:20:21.063 "io_qpairs": 19, 00:20:21.063 "name": "nvmf_tgt_poll_group_2", 00:20:21.063 "pending_bdev_io": 0, 00:20:21.063 "transports": [ 00:20:21.063 { 00:20:21.063 "trtype": "TCP" 00:20:21.063 } 00:20:21.063 ] 00:20:21.063 }, 00:20:21.063 { 00:20:21.063 "admin_qpairs": 1, 00:20:21.063 "completed_nvme_io": 117, 00:20:21.063 "current_admin_qpairs": 0, 00:20:21.063 "current_io_qpairs": 0, 00:20:21.063 "io_qpairs": 18, 00:20:21.063 "name": "nvmf_tgt_poll_group_3", 00:20:21.063 "pending_bdev_io": 0, 00:20:21.063 "transports": [ 00:20:21.063 { 00:20:21.063 "trtype": "TCP" 00:20:21.063 } 00:20:21.063 ] 00:20:21.063 } 00:20:21.063 ], 00:20:21.063 "tick_rate": 2200000000 00:20:21.063 }' 00:20:21.063 15:38:51 -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:20:21.063 15:38:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:20:21.063 15:38:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:21.063 15:38:51 -- target/rpc.sh@20 -- # jq 
'.poll_groups[].admin_qpairs' 00:20:21.063 15:38:51 -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:20:21.063 15:38:51 -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:20:21.063 15:38:51 -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:20:21.063 15:38:51 -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:20:21.063 15:38:51 -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:20:21.063 15:38:51 -- target/rpc.sh@113 -- # (( 70 > 0 )) 00:20:21.063 15:38:51 -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:20:21.063 15:38:51 -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:20:21.063 15:38:51 -- target/rpc.sh@123 -- # nvmftestfini 00:20:21.063 15:38:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:21.063 15:38:51 -- nvmf/common.sh@117 -- # sync 00:20:21.063 15:38:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.063 15:38:51 -- nvmf/common.sh@120 -- # set +e 00:20:21.063 15:38:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.063 15:38:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.063 rmmod nvme_tcp 00:20:21.063 rmmod nvme_fabrics 00:20:21.323 rmmod nvme_keyring 00:20:21.323 15:38:51 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.323 15:38:51 -- nvmf/common.sh@124 -- # set -e 00:20:21.323 15:38:51 -- nvmf/common.sh@125 -- # return 0 00:20:21.323 15:38:51 -- nvmf/common.sh@478 -- # '[' -n 67256 ']' 00:20:21.323 15:38:51 -- nvmf/common.sh@479 -- # killprocess 67256 00:20:21.323 15:38:51 -- common/autotest_common.sh@936 -- # '[' -z 67256 ']' 00:20:21.323 15:38:51 -- common/autotest_common.sh@940 -- # kill -0 67256 00:20:21.323 15:38:51 -- common/autotest_common.sh@941 -- # uname 00:20:21.323 15:38:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:21.323 15:38:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 67256 00:20:21.323 killing process with pid 67256 00:20:21.323 15:38:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:20:21.323 15:38:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:20:21.323 15:38:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 67256' 00:20:21.323 15:38:51 -- common/autotest_common.sh@955 -- # kill 67256 00:20:21.323 15:38:51 -- common/autotest_common.sh@960 -- # wait 67256 00:20:21.582 15:38:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:21.582 15:38:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:21.582 15:38:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:21.582 15:38:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.582 15:38:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.582 15:38:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.582 15:38:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.582 15:38:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.582 15:38:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:21.582 00:20:21.582 real 0m19.017s 00:20:21.582 user 1m11.238s 00:20:21.582 sys 0m2.687s 00:20:21.582 15:38:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:21.582 ************************************ 00:20:21.582 END TEST nvmf_rpc 00:20:21.582 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.582 ************************************ 00:20:21.582 15:38:51 -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:21.582 15:38:51 -- 
common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:21.582 15:38:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:21.582 15:38:51 -- common/autotest_common.sh@10 -- # set +x 00:20:21.582 ************************************ 00:20:21.582 START TEST nvmf_invalid 00:20:21.582 ************************************ 00:20:21.582 15:38:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:20:21.841 * Looking for test storage... 00:20:21.841 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:21.841 15:38:51 -- target/invalid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:21.841 15:38:51 -- nvmf/common.sh@7 -- # uname -s 00:20:21.841 15:38:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:21.841 15:38:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:21.841 15:38:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:21.841 15:38:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:21.841 15:38:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:21.841 15:38:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:21.841 15:38:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:21.841 15:38:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:21.841 15:38:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:21.841 15:38:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:21.841 15:38:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:21.841 15:38:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:21.841 15:38:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:21.841 15:38:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:21.841 15:38:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:21.841 15:38:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:21.841 15:38:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:21.841 15:38:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:21.841 15:38:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:21.841 15:38:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:21.841 15:38:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.842 15:38:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.842 15:38:51 -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.842 15:38:51 -- paths/export.sh@5 -- # export PATH 00:20:21.842 15:38:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:21.842 15:38:51 -- nvmf/common.sh@47 -- # : 0 00:20:21.842 15:38:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:21.842 15:38:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:21.842 15:38:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:21.842 15:38:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:21.842 15:38:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:21.842 15:38:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:21.842 15:38:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:21.842 15:38:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:21.842 15:38:51 -- target/invalid.sh@11 -- # multi_target_rpc=/home/vagrant/spdk_repo/spdk/test/nvmf/target/multitarget_rpc.py 00:20:21.842 15:38:51 -- target/invalid.sh@12 -- # rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:21.842 15:38:51 -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:20:21.842 15:38:51 -- target/invalid.sh@14 -- # target=foobar 00:20:21.842 15:38:51 -- target/invalid.sh@16 -- # RANDOM=0 00:20:21.842 15:38:51 -- target/invalid.sh@34 -- # nvmftestinit 00:20:21.842 15:38:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:21.842 15:38:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:21.842 15:38:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:21.842 15:38:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:21.842 15:38:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:21.842 15:38:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.842 15:38:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:21.842 15:38:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:21.842 15:38:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:21.842 15:38:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:21.842 15:38:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:21.842 15:38:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:21.842 15:38:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:21.842 15:38:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:21.842 15:38:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:21.842 15:38:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:21.842 15:38:51 -- nvmf/common.sh@143 -- # 
NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:21.842 15:38:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:21.842 15:38:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:21.842 15:38:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:21.842 15:38:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:21.842 15:38:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:21.842 15:38:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:21.842 15:38:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:21.842 15:38:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:21.842 15:38:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:21.842 15:38:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:21.842 15:38:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:21.842 Cannot find device "nvmf_tgt_br" 00:20:21.842 15:38:52 -- nvmf/common.sh@155 -- # true 00:20:21.842 15:38:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:21.842 Cannot find device "nvmf_tgt_br2" 00:20:21.842 15:38:52 -- nvmf/common.sh@156 -- # true 00:20:21.842 15:38:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:21.842 15:38:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:21.842 Cannot find device "nvmf_tgt_br" 00:20:21.842 15:38:52 -- nvmf/common.sh@158 -- # true 00:20:21.842 15:38:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:21.842 Cannot find device "nvmf_tgt_br2" 00:20:21.842 15:38:52 -- nvmf/common.sh@159 -- # true 00:20:21.842 15:38:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:21.842 15:38:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:21.842 15:38:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:21.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.842 15:38:52 -- nvmf/common.sh@162 -- # true 00:20:21.842 15:38:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:21.842 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:21.842 15:38:52 -- nvmf/common.sh@163 -- # true 00:20:21.842 15:38:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:21.842 15:38:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:21.842 15:38:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:22.101 15:38:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:22.101 15:38:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:22.101 15:38:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:22.101 15:38:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:22.101 15:38:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:22.101 15:38:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:22.101 15:38:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:22.101 15:38:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:22.101 15:38:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:22.101 15:38:52 -- nvmf/common.sh@186 -- # ip link 
set nvmf_tgt_br2 up 00:20:22.101 15:38:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:22.101 15:38:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:22.101 15:38:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:22.101 15:38:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:22.101 15:38:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:22.101 15:38:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:22.101 15:38:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:22.101 15:38:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:22.101 15:38:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:22.101 15:38:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:22.101 15:38:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:22.101 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:22.101 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:20:22.101 00:20:22.101 --- 10.0.0.2 ping statistics --- 00:20:22.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.101 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:20:22.101 15:38:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:22.101 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:22.101 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.049 ms 00:20:22.101 00:20:22.101 --- 10.0.0.3 ping statistics --- 00:20:22.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.101 rtt min/avg/max/mdev = 0.049/0.049/0.049/0.000 ms 00:20:22.101 15:38:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:22.101 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:22.101 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.051 ms 00:20:22.101 00:20:22.101 --- 10.0.0.1 ping statistics --- 00:20:22.101 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:22.101 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:20:22.101 15:38:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:22.101 15:38:52 -- nvmf/common.sh@422 -- # return 0 00:20:22.101 15:38:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:22.101 15:38:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:22.101 15:38:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:22.101 15:38:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:22.101 15:38:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:22.101 15:38:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:22.101 15:38:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:22.101 15:38:52 -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:20:22.101 15:38:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:22.101 15:38:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:22.101 15:38:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.101 15:38:52 -- nvmf/common.sh@470 -- # nvmfpid=67780 00:20:22.101 15:38:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:22.101 15:38:52 -- nvmf/common.sh@471 -- # waitforlisten 67780 00:20:22.101 15:38:52 -- common/autotest_common.sh@817 -- # '[' -z 67780 ']' 00:20:22.101 15:38:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:22.101 15:38:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:22.101 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:22.101 15:38:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:22.101 15:38:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:22.101 15:38:52 -- common/autotest_common.sh@10 -- # set +x 00:20:22.101 [2024-04-26 15:38:52.375495] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:20:22.101 [2024-04-26 15:38:52.375574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:22.361 [2024-04-26 15:38:52.512596] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:22.361 [2024-04-26 15:38:52.638051] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:22.361 [2024-04-26 15:38:52.638100] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:22.361 [2024-04-26 15:38:52.638111] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:22.361 [2024-04-26 15:38:52.638119] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:22.361 [2024-04-26 15:38:52.638126] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
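The block just above is nvmf_veth_init followed by nvmfappstart: the target gets its own network namespace (nvmf_tgt_ns_spdk) with addresses 10.0.0.2 and 10.0.0.3, the initiator keeps 10.0.0.1 in the root namespace, a bridge joins the veth peers, and TCP port 4420 is opened before the ping checks and the nvmf_tgt launch. Condensed into plain ip/iptables commands, this is a sketch of what the helper does rather than the helper itself:

# Root namespace <-> target namespace wiring (names and addresses as in the trace):
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                                  # initiator
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # first target IP
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2   # second target IP
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk sh -c 'ip link set nvmf_tgt_if up; ip link set nvmf_tgt_if2 up; ip link set lo up'

# Bridge the root-namespace ends together and open NVMe/TCP traffic on 4420:
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Reachability checks, then the target is started inside the namespace:
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

Running the target in its own namespace is what lets a single VM act as both the initiator (10.0.0.1) and the NVMe/TCP target (10.0.0.2 and 10.0.0.3) over a purely virtual link.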
00:20:22.361 [2024-04-26 15:38:52.638271] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:22.361 [2024-04-26 15:38:52.638587] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:22.361 [2024-04-26 15:38:52.639206] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:22.361 [2024-04-26 15:38:52.639215] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:20:23.296 15:38:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:23.296 15:38:53 -- common/autotest_common.sh@850 -- # return 0 00:20:23.296 15:38:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:23.296 15:38:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:23.296 15:38:53 -- common/autotest_common.sh@10 -- # set +x 00:20:23.296 15:38:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:23.296 15:38:53 -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:20:23.296 15:38:53 -- target/invalid.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode20502 00:20:23.554 [2024-04-26 15:38:53.767607] nvmf_rpc.c: 401:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:20:23.554 15:38:53 -- target/invalid.sh@40 -- # out='2024/04/26 15:38:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20502 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:20:23.554 request: 00:20:23.554 { 00:20:23.554 "method": "nvmf_create_subsystem", 00:20:23.554 "params": { 00:20:23.554 "nqn": "nqn.2016-06.io.spdk:cnode20502", 00:20:23.554 "tgt_name": "foobar" 00:20:23.554 } 00:20:23.554 } 00:20:23.554 Got JSON-RPC error response 00:20:23.554 GoRPCClient: error on JSON-RPC call' 00:20:23.554 15:38:53 -- target/invalid.sh@41 -- # [[ 2024/04/26 15:38:53 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode20502 tgt_name:foobar], err: error received for nvmf_create_subsystem method, err: Code=-32603 Msg=Unable to find target foobar 00:20:23.554 request: 00:20:23.554 { 00:20:23.554 "method": "nvmf_create_subsystem", 00:20:23.554 "params": { 00:20:23.554 "nqn": "nqn.2016-06.io.spdk:cnode20502", 00:20:23.554 "tgt_name": "foobar" 00:20:23.554 } 00:20:23.554 } 00:20:23.554 Got JSON-RPC error response 00:20:23.554 GoRPCClient: error on JSON-RPC call == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:20:23.554 15:38:53 -- target/invalid.sh@45 -- # echo -e '\x1f' 00:20:23.554 15:38:53 -- target/invalid.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode6922 00:20:23.812 [2024-04-26 15:38:54.047884] nvmf_rpc.c: 418:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode6922: invalid serial number 'SPDKISFASTANDAWESOME' 00:20:23.812 15:38:54 -- target/invalid.sh@45 -- # out='2024/04/26 15:38:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6922 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:20:23.812 request: 00:20:23.812 { 00:20:23.812 "method": "nvmf_create_subsystem", 00:20:23.812 "params": { 00:20:23.812 "nqn": "nqn.2016-06.io.spdk:cnode6922", 00:20:23.812 "serial_number": 
"SPDKISFASTANDAWESOME\u001f" 00:20:23.812 } 00:20:23.812 } 00:20:23.812 Got JSON-RPC error response 00:20:23.812 GoRPCClient: error on JSON-RPC call' 00:20:23.812 15:38:54 -- target/invalid.sh@46 -- # [[ 2024/04/26 15:38:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[nqn:nqn.2016-06.io.spdk:cnode6922 serial_number:SPDKISFASTANDAWESOME], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid SN SPDKISFASTANDAWESOME 00:20:23.812 request: 00:20:23.812 { 00:20:23.812 "method": "nvmf_create_subsystem", 00:20:23.812 "params": { 00:20:23.812 "nqn": "nqn.2016-06.io.spdk:cnode6922", 00:20:23.812 "serial_number": "SPDKISFASTANDAWESOME\u001f" 00:20:23.812 } 00:20:23.812 } 00:20:23.812 Got JSON-RPC error response 00:20:23.812 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \S\N* ]] 00:20:23.812 15:38:54 -- target/invalid.sh@50 -- # echo -e '\x1f' 00:20:23.812 15:38:54 -- target/invalid.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode16030 00:20:24.070 [2024-04-26 15:38:54.304165] nvmf_rpc.c: 427:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode16030: invalid model number 'SPDK_Controller' 00:20:24.070 15:38:54 -- target/invalid.sh@50 -- # out='2024/04/26 15:38:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16030], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:20:24.070 request: 00:20:24.070 { 00:20:24.070 "method": "nvmf_create_subsystem", 00:20:24.070 "params": { 00:20:24.070 "nqn": "nqn.2016-06.io.spdk:cnode16030", 00:20:24.070 "model_number": "SPDK_Controller\u001f" 00:20:24.070 } 00:20:24.070 } 00:20:24.070 Got JSON-RPC error response 00:20:24.070 GoRPCClient: error on JSON-RPC call' 00:20:24.070 15:38:54 -- target/invalid.sh@51 -- # [[ 2024/04/26 15:38:54 error on JSON-RPC call, method: nvmf_create_subsystem, params: map[model_number:SPDK_Controller nqn:nqn.2016-06.io.spdk:cnode16030], err: error received for nvmf_create_subsystem method, err: Code=-32602 Msg=Invalid MN SPDK_Controller 00:20:24.070 request: 00:20:24.070 { 00:20:24.070 "method": "nvmf_create_subsystem", 00:20:24.070 "params": { 00:20:24.070 "nqn": "nqn.2016-06.io.spdk:cnode16030", 00:20:24.070 "model_number": "SPDK_Controller\u001f" 00:20:24.070 } 00:20:24.070 } 00:20:24.070 Got JSON-RPC error response 00:20:24.070 GoRPCClient: error on JSON-RPC call == *\I\n\v\a\l\i\d\ \M\N* ]] 00:20:24.070 15:38:54 -- target/invalid.sh@54 -- # gen_random_s 21 00:20:24.070 15:38:54 -- target/invalid.sh@19 -- # local length=21 ll 00:20:24.070 15:38:54 -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:20:24.070 15:38:54 -- target/invalid.sh@21 -- # local chars 00:20:24.070 15:38:54 -- target/invalid.sh@22 -- # local string 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll = 0 )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 
00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 36 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x24' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+='$' 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 109 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+=m 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 75 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x4b' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+=K 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 60 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x3c' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+='<' 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 71 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+=G 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 51 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x33' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+=3 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 107 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x6b' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+=k 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # printf %x 110 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x6e' 00:20:24.070 15:38:54 -- target/invalid.sh@25 -- # string+=n 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.070 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 35 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x23' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+='#' 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 86 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x56' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=V 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 106 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x6a' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=j 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 
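The long printf/echo/string+= run here is target/invalid.sh's gen_random_s helper assembling a random printable string one character at a time; the result is later handed to nvmf_create_subsystem as a deliberately awkward model number. The idea, compressed into a standalone function, is a sketch of the approach rather than the script's exact code (the script also pins RANDOM=0 for reproducible runs and checks whether the first character is a '-'):

gen_random_s() {
    local length=$1 ll string=
    local chars=($(seq 32 127))                        # decimal codes of printable ASCII (plus DEL)
    for ((ll = 0; ll < length; ll++)); do
        local code=${chars[RANDOM % ${#chars[@]}]}
        string+=$(echo -e "\x$(printf '%x' "$code")")  # decimal code -> hex escape -> character
    done
    printf '%s\n' "$string"
}
gen_random_s 21    # a 21-character candidate model number, as in the trace above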
00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 50 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x32' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=2 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 96 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x60' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+='`' 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 41 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x29' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=')' 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 68 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x44' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=D 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 42 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x2a' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+='*' 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 86 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x56' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=V 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 118 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x76' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=v 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 71 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x47' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=G 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 67 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x43' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=C 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # printf %x 109 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # echo -e '\x6d' 00:20:24.329 15:38:54 -- target/invalid.sh@25 -- # string+=m 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll++ )) 00:20:24.329 15:38:54 -- target/invalid.sh@24 -- # (( ll < length )) 00:20:24.329 15:38:54 -- target/invalid.sh@28 -- # [[ $ == \- ]] 00:20:24.329 15:38:54 -- target/invalid.sh@31 -- # echo '$mK /dev/null' 00:20:27.689 15:38:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.689 15:38:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:27.689 00:20:27.689 real 
0m5.926s 00:20:27.689 user 0m23.662s 00:20:27.689 sys 0m1.272s 00:20:27.689 15:38:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:27.689 15:38:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.689 ************************************ 00:20:27.689 END TEST nvmf_invalid 00:20:27.689 ************************************ 00:20:27.689 15:38:57 -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:20:27.689 15:38:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:27.689 15:38:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:27.689 15:38:57 -- common/autotest_common.sh@10 -- # set +x 00:20:27.689 ************************************ 00:20:27.689 START TEST nvmf_abort 00:20:27.689 ************************************ 00:20:27.689 15:38:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort.sh --transport=tcp 00:20:27.948 * Looking for test storage... 00:20:27.948 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:27.948 15:38:57 -- target/abort.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:27.948 15:38:57 -- nvmf/common.sh@7 -- # uname -s 00:20:27.948 15:38:58 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:27.948 15:38:58 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:27.948 15:38:58 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:27.948 15:38:58 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:27.948 15:38:58 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:27.948 15:38:58 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:27.948 15:38:58 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:27.948 15:38:58 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:27.948 15:38:58 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:27.948 15:38:58 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:27.948 15:38:58 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:27.948 15:38:58 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:27.948 15:38:58 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:27.948 15:38:58 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:27.948 15:38:58 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:27.948 15:38:58 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:27.948 15:38:58 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:27.948 15:38:58 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:27.948 15:38:58 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:27.948 15:38:58 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:27.948 15:38:58 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.948 15:38:58 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.948 15:38:58 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.948 15:38:58 -- paths/export.sh@5 -- # export PATH 00:20:27.948 15:38:58 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:27.948 15:38:58 -- nvmf/common.sh@47 -- # : 0 00:20:27.948 15:38:58 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:27.948 15:38:58 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:27.949 15:38:58 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:27.949 15:38:58 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:27.949 15:38:58 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:27.949 15:38:58 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:27.949 15:38:58 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:27.949 15:38:58 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:27.949 15:38:58 -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:27.949 15:38:58 -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:20:27.949 15:38:58 -- target/abort.sh@14 -- # nvmftestinit 00:20:27.949 15:38:58 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:27.949 15:38:58 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:27.949 15:38:58 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:27.949 15:38:58 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:27.949 15:38:58 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:27.949 15:38:58 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.949 15:38:58 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:27.949 15:38:58 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:27.949 15:38:58 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:27.949 15:38:58 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:27.949 15:38:58 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:27.949 15:38:58 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:27.949 15:38:58 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:27.949 15:38:58 -- 
nvmf/common.sh@421 -- # nvmf_veth_init 00:20:27.949 15:38:58 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:27.949 15:38:58 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:27.949 15:38:58 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:27.949 15:38:58 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:27.949 15:38:58 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:27.949 15:38:58 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:27.949 15:38:58 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:27.949 15:38:58 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:27.949 15:38:58 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:27.949 15:38:58 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:27.949 15:38:58 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:27.949 15:38:58 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:27.949 15:38:58 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:27.949 15:38:58 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:27.949 Cannot find device "nvmf_tgt_br" 00:20:27.949 15:38:58 -- nvmf/common.sh@155 -- # true 00:20:27.949 15:38:58 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:27.949 Cannot find device "nvmf_tgt_br2" 00:20:27.949 15:38:58 -- nvmf/common.sh@156 -- # true 00:20:27.949 15:38:58 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:27.949 15:38:58 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:27.949 Cannot find device "nvmf_tgt_br" 00:20:27.949 15:38:58 -- nvmf/common.sh@158 -- # true 00:20:27.949 15:38:58 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:27.949 Cannot find device "nvmf_tgt_br2" 00:20:27.949 15:38:58 -- nvmf/common.sh@159 -- # true 00:20:27.949 15:38:58 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:27.949 15:38:58 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:27.949 15:38:58 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:27.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.949 15:38:58 -- nvmf/common.sh@162 -- # true 00:20:27.949 15:38:58 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:27.949 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:27.949 15:38:58 -- nvmf/common.sh@163 -- # true 00:20:27.949 15:38:58 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:27.949 15:38:58 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:27.949 15:38:58 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:27.949 15:38:58 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:27.949 15:38:58 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:27.949 15:38:58 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:27.949 15:38:58 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:27.949 15:38:58 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:27.949 15:38:58 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:28.208 15:38:58 -- nvmf/common.sh@183 
-- # ip link set nvmf_init_if up 00:20:28.208 15:38:58 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:28.208 15:38:58 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:28.208 15:38:58 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:28.208 15:38:58 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:28.208 15:38:58 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:28.208 15:38:58 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:28.208 15:38:58 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:28.208 15:38:58 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:28.208 15:38:58 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:28.208 15:38:58 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:28.208 15:38:58 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:28.208 15:38:58 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:28.208 15:38:58 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:28.208 15:38:58 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:28.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:28.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:20:28.208 00:20:28.208 --- 10.0.0.2 ping statistics --- 00:20:28.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.208 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:20:28.208 15:38:58 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:28.208 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:28.208 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.060 ms 00:20:28.208 00:20:28.208 --- 10.0.0.3 ping statistics --- 00:20:28.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.208 rtt min/avg/max/mdev = 0.060/0.060/0.060/0.000 ms 00:20:28.208 15:38:58 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:28.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:28.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.031 ms 00:20:28.208 00:20:28.208 --- 10.0.0.1 ping statistics --- 00:20:28.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:28.208 rtt min/avg/max/mdev = 0.031/0.031/0.031/0.000 ms 00:20:28.208 15:38:58 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:28.208 15:38:58 -- nvmf/common.sh@422 -- # return 0 00:20:28.208 15:38:58 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:28.208 15:38:58 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:28.208 15:38:58 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:28.208 15:38:58 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:28.208 15:38:58 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:28.208 15:38:58 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:28.208 15:38:58 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:28.208 15:38:58 -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:20:28.208 15:38:58 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:28.208 15:38:58 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:28.208 15:38:58 -- common/autotest_common.sh@10 -- # set +x 00:20:28.208 15:38:58 -- nvmf/common.sh@470 -- # nvmfpid=68297 00:20:28.208 15:38:58 -- nvmf/common.sh@471 -- # waitforlisten 68297 00:20:28.208 15:38:58 -- common/autotest_common.sh@817 -- # '[' -z 68297 ']' 00:20:28.208 15:38:58 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:28.208 15:38:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:28.208 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:28.208 15:38:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:28.208 15:38:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:28.208 15:38:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:28.208 15:38:58 -- common/autotest_common.sh@10 -- # set +x 00:20:28.208 [2024-04-26 15:38:58.439866] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:20:28.208 [2024-04-26 15:38:58.439986] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:28.466 [2024-04-26 15:38:58.579246] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:28.466 [2024-04-26 15:38:58.702387] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:28.466 [2024-04-26 15:38:58.702657] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:28.466 [2024-04-26 15:38:58.702818] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:28.466 [2024-04-26 15:38:58.702950] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:28.466 [2024-04-26 15:38:58.702987] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
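The nvmf_veth_init sequence above builds a small bridged topology: the initiator keeps 10.0.0.1 on nvmf_init_if, while 10.0.0.2 and 10.0.0.3 sit on veth peers moved into the nvmf_tgt_ns_spdk namespace, and the bridge-side ends are enslaved to nvmf_br. A hand-written equivalent, using the same interface names and addresses that appear in the log, would look roughly like this (simplified sketch, not the exact common.sh implementation):

# create the target namespace and three veth pairs (host end / bridge end)
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
# move the target-side interfaces into the namespace and assign addresses
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
# bring links up and tie the bridge-side ends together under nvmf_br
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
# allow NVMe/TCP traffic on port 4420 in, and forwarding across the bridge
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
# sanity check: initiator and target can reach each other
ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1

The two iptables rules are what let the port 4420 connects and the cross-bridge pings above succeed.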
00:20:28.466 [2024-04-26 15:38:58.703266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:28.466 [2024-04-26 15:38:58.703363] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:28.466 [2024-04-26 15:38:58.703367] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.439 15:38:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:29.439 15:38:59 -- common/autotest_common.sh@850 -- # return 0 00:20:29.439 15:38:59 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:29.439 15:38:59 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 15:38:59 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.439 15:38:59 -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 [2024-04-26 15:38:59.460731] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 Malloc0 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 Delay0 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 [2024-04-26 15:38:59.537901] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:29.439 15:38:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:29.439 15:38:59 -- common/autotest_common.sh@10 -- # set +x 00:20:29.439 15:38:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:29.439 15:38:59 -- target/abort.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:20:29.439 [2024-04-26 15:38:59.717590] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:20:31.969 Initializing NVMe Controllers 00:20:31.969 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:20:31.969 controller IO queue size 128 less than required 00:20:31.969 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:20:31.969 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:20:31.969 Initialization complete. Launching workers. 00:20:31.969 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 35098 00:20:31.969 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 35159, failed to submit 62 00:20:31.969 success 35102, unsuccess 57, failed 0 00:20:31.969 15:39:01 -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:20:31.969 15:39:01 -- common/autotest_common.sh@549 -- # xtrace_disable 00:20:31.969 15:39:01 -- common/autotest_common.sh@10 -- # set +x 00:20:31.969 15:39:01 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:20:31.969 15:39:01 -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:20:31.969 15:39:01 -- target/abort.sh@38 -- # nvmftestfini 00:20:31.969 15:39:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:20:31.969 15:39:01 -- nvmf/common.sh@117 -- # sync 00:20:31.969 15:39:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:31.969 15:39:01 -- nvmf/common.sh@120 -- # set +e 00:20:31.969 15:39:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:31.969 15:39:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:31.969 rmmod nvme_tcp 00:20:31.969 rmmod nvme_fabrics 00:20:31.969 rmmod nvme_keyring 00:20:31.969 15:39:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:31.969 15:39:01 -- nvmf/common.sh@124 -- # set -e 00:20:31.969 15:39:01 -- nvmf/common.sh@125 -- # return 0 00:20:31.969 15:39:01 -- nvmf/common.sh@478 -- # '[' -n 68297 ']' 00:20:31.969 15:39:01 -- nvmf/common.sh@479 -- # killprocess 68297 00:20:31.969 15:39:01 -- common/autotest_common.sh@936 -- # '[' -z 68297 ']' 00:20:31.969 15:39:01 -- common/autotest_common.sh@940 -- # kill -0 68297 00:20:31.969 15:39:01 -- common/autotest_common.sh@941 -- # uname 00:20:31.969 15:39:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:20:31.969 15:39:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68297 00:20:31.969 killing process with pid 68297 00:20:31.969 15:39:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:20:31.969 15:39:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:20:31.969 15:39:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68297' 00:20:31.969 15:39:01 -- common/autotest_common.sh@955 -- # kill 68297 00:20:31.969 15:39:01 -- common/autotest_common.sh@960 -- # wait 68297 00:20:31.969 15:39:02 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:20:31.969 15:39:02 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:20:31.969 15:39:02 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:20:31.969 15:39:02 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:31.969 15:39:02 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:31.969 15:39:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:31.969 
15:39:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:31.969 15:39:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:31.969 15:39:02 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:20:31.969 00:20:31.969 real 0m4.276s 00:20:31.969 user 0m12.120s 00:20:31.969 sys 0m1.040s 00:20:31.969 15:39:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:20:31.969 ************************************ 00:20:31.969 END TEST nvmf_abort 00:20:31.969 ************************************ 00:20:31.969 15:39:02 -- common/autotest_common.sh@10 -- # set +x 00:20:31.969 15:39:02 -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:20:31.969 15:39:02 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:20:31.969 15:39:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:20:31.969 15:39:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.227 ************************************ 00:20:32.227 START TEST nvmf_ns_hotplug_stress 00:20:32.227 ************************************ 00:20:32.227 15:39:02 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:20:32.227 * Looking for test storage... 00:20:32.227 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:20:32.227 15:39:02 -- target/ns_hotplug_stress.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:20:32.227 15:39:02 -- nvmf/common.sh@7 -- # uname -s 00:20:32.227 15:39:02 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:32.227 15:39:02 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:32.227 15:39:02 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:32.227 15:39:02 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:32.227 15:39:02 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:32.227 15:39:02 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:32.227 15:39:02 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:32.227 15:39:02 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:32.227 15:39:02 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:32.227 15:39:02 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:32.227 15:39:02 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:32.227 15:39:02 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:20:32.227 15:39:02 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:32.227 15:39:02 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:32.227 15:39:02 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:20:32.227 15:39:02 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:32.227 15:39:02 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:20:32.227 15:39:02 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:32.227 15:39:02 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:32.227 15:39:02 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:32.228 15:39:02 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.228 15:39:02 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.228 15:39:02 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.228 15:39:02 -- paths/export.sh@5 -- # export PATH 00:20:32.228 15:39:02 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:32.228 15:39:02 -- nvmf/common.sh@47 -- # : 0 00:20:32.228 15:39:02 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:32.228 15:39:02 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:32.228 15:39:02 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:32.228 15:39:02 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:32.228 15:39:02 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:32.228 15:39:02 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:32.228 15:39:02 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:32.228 15:39:02 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:32.228 15:39:02 -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:32.228 15:39:02 -- target/ns_hotplug_stress.sh@13 -- # nvmftestinit 00:20:32.228 15:39:02 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:20:32.228 15:39:02 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:32.228 15:39:02 -- nvmf/common.sh@437 -- # prepare_net_devs 00:20:32.228 15:39:02 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:20:32.228 15:39:02 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:20:32.228 15:39:02 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 
00:20:32.228 15:39:02 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:20:32.228 15:39:02 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:32.228 15:39:02 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:20:32.228 15:39:02 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:20:32.228 15:39:02 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:20:32.228 15:39:02 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:20:32.228 15:39:02 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:20:32.228 15:39:02 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:20:32.228 15:39:02 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:32.228 15:39:02 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:32.228 15:39:02 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:20:32.228 15:39:02 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:20:32.228 15:39:02 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:20:32.228 15:39:02 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:20:32.228 15:39:02 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:20:32.228 15:39:02 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:32.228 15:39:02 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:20:32.228 15:39:02 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:20:32.228 15:39:02 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:20:32.228 15:39:02 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:20:32.228 15:39:02 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:20:32.228 15:39:02 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:20:32.228 Cannot find device "nvmf_tgt_br" 00:20:32.228 15:39:02 -- nvmf/common.sh@155 -- # true 00:20:32.228 15:39:02 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:20:32.228 Cannot find device "nvmf_tgt_br2" 00:20:32.228 15:39:02 -- nvmf/common.sh@156 -- # true 00:20:32.228 15:39:02 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:20:32.228 15:39:02 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:20:32.228 Cannot find device "nvmf_tgt_br" 00:20:32.228 15:39:02 -- nvmf/common.sh@158 -- # true 00:20:32.228 15:39:02 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:20:32.228 Cannot find device "nvmf_tgt_br2" 00:20:32.228 15:39:02 -- nvmf/common.sh@159 -- # true 00:20:32.228 15:39:02 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:20:32.486 15:39:02 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:20:32.486 15:39:02 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:20:32.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.486 15:39:02 -- nvmf/common.sh@162 -- # true 00:20:32.486 15:39:02 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:20:32.486 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:20:32.486 15:39:02 -- nvmf/common.sh@163 -- # true 00:20:32.486 15:39:02 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:20:32.486 15:39:02 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:20:32.486 15:39:02 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:20:32.486 15:39:02 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:20:32.486 15:39:02 -- 
nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:20:32.486 15:39:02 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:20:32.486 15:39:02 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:20:32.486 15:39:02 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:20:32.486 15:39:02 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:20:32.486 15:39:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:20:32.486 15:39:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:20:32.486 15:39:02 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:20:32.486 15:39:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:20:32.486 15:39:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:20:32.486 15:39:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:20:32.486 15:39:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:20:32.486 15:39:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:20:32.486 15:39:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:20:32.486 15:39:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:20:32.486 15:39:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:20:32.486 15:39:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:20:32.486 15:39:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:20:32.486 15:39:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:20:32.486 15:39:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:20:32.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:32.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.063 ms 00:20:32.486 00:20:32.486 --- 10.0.0.2 ping statistics --- 00:20:32.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.486 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:20:32.486 15:39:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:20:32.486 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:20:32.486 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:20:32.486 00:20:32.486 --- 10.0.0.3 ping statistics --- 00:20:32.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.486 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:20:32.486 15:39:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:20:32.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:32.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:20:32.486 00:20:32.487 --- 10.0.0.1 ping statistics --- 00:20:32.487 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:32.487 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:20:32.487 15:39:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:32.487 15:39:02 -- nvmf/common.sh@422 -- # return 0 00:20:32.487 15:39:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:20:32.487 15:39:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:32.487 15:39:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:20:32.487 15:39:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:20:32.487 15:39:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:32.487 15:39:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:20:32.487 15:39:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:20:32.487 15:39:02 -- target/ns_hotplug_stress.sh@14 -- # nvmfappstart -m 0xE 00:20:32.487 15:39:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:20:32.487 15:39:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:20:32.487 15:39:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.487 15:39:02 -- nvmf/common.sh@470 -- # nvmfpid=68560 00:20:32.487 15:39:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:32.487 15:39:02 -- nvmf/common.sh@471 -- # waitforlisten 68560 00:20:32.487 15:39:02 -- common/autotest_common.sh@817 -- # '[' -z 68560 ']' 00:20:32.487 15:39:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:32.487 15:39:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:20:32.487 15:39:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:32.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:32.487 15:39:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:20:32.487 15:39:02 -- common/autotest_common.sh@10 -- # set +x 00:20:32.745 [2024-04-26 15:39:02.829847] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:20:32.745 [2024-04-26 15:39:02.829961] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:32.745 [2024-04-26 15:39:02.973315] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:33.003 [2024-04-26 15:39:03.107893] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:33.003 [2024-04-26 15:39:03.107971] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:33.003 [2024-04-26 15:39:03.107986] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:33.003 [2024-04-26 15:39:03.107996] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:33.003 [2024-04-26 15:39:03.108005] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
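With the target up, the ns_hotplug_stress run that follows reduces to a simple pattern: create a TCP transport, subsystem and listener, attach a resizable null bdev, start spdk_nvme_perf against it, and then keep hot-adding and removing a namespace while growing NULL1 for as long as the perf job is alive. A condensed sketch of that loop, using the same rpc.py calls seen below (the perf_pid and size variables are illustrative, not taken from the script):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# one-time target setup, values as in this run
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_malloc_create 32 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
$rpc bdev_null_create NULL1 1000 512
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
# background I/O load (PERF_PID in the log)
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
    -t 30 -q 128 -w randread -o 512 -Q 1000 &
perf_pid=$!
# hotplug loop: add/remove a namespace and grow NULL1 while perf keeps running
size=1000
while kill -0 "$perf_pid" 2>/dev/null; do
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    size=$((size + 1))
    $rpc bdev_null_resize NULL1 "$size"
    $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
done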
00:20:33.003 [2024-04-26 15:39:03.108194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:20:33.003 [2024-04-26 15:39:03.108950] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:20:33.003 [2024-04-26 15:39:03.108964] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:20:33.582 15:39:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:20:33.582 15:39:03 -- common/autotest_common.sh@850 -- # return 0 00:20:33.582 15:39:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:20:33.582 15:39:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:20:33.582 15:39:03 -- common/autotest_common.sh@10 -- # set +x 00:20:33.840 15:39:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:33.840 15:39:03 -- target/ns_hotplug_stress.sh@16 -- # null_size=1000 00:20:33.840 15:39:03 -- target/ns_hotplug_stress.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:34.100 [2024-04-26 15:39:04.145957] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:34.100 15:39:04 -- target/ns_hotplug_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:20:34.358 15:39:04 -- target/ns_hotplug_stress.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:34.616 [2024-04-26 15:39:04.664267] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:34.616 15:39:04 -- target/ns_hotplug_stress.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:34.875 15:39:04 -- target/ns_hotplug_stress.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:20:35.175 Malloc0 00:20:35.175 15:39:05 -- target/ns_hotplug_stress.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:20:35.433 Delay0 00:20:35.433 15:39:05 -- target/ns_hotplug_stress.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:35.690 15:39:05 -- target/ns_hotplug_stress.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:20:35.690 NULL1 00:20:35.948 15:39:05 -- target/ns_hotplug_stress.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:20:35.948 15:39:06 -- target/ns_hotplug_stress.sh@33 -- # PERF_PID=68697 00:20:35.948 15:39:06 -- target/ns_hotplug_stress.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:20:35.948 15:39:06 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:35.948 15:39:06 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:37.321 Read completed with error (sct=0, sc=11) 00:20:37.321 15:39:07 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:37.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.321 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:20:37.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.321 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.580 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:37.580 15:39:07 -- target/ns_hotplug_stress.sh@40 -- # null_size=1001 00:20:37.580 15:39:07 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:20:37.839 true 00:20:37.839 15:39:08 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:37.839 15:39:08 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:38.774 15:39:08 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:38.774 15:39:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1002 00:20:38.774 15:39:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:20:39.032 true 00:20:39.032 15:39:09 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:39.032 15:39:09 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:39.291 15:39:09 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:39.548 15:39:09 -- target/ns_hotplug_stress.sh@40 -- # null_size=1003 00:20:39.548 15:39:09 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:20:39.806 true 00:20:39.806 15:39:10 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:39.806 15:39:10 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:40.739 15:39:10 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:41.066 15:39:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1004 00:20:41.066 15:39:11 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:20:41.066 true 00:20:41.066 15:39:11 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:41.066 15:39:11 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:41.324 15:39:11 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:41.582 15:39:11 -- target/ns_hotplug_stress.sh@40 -- # null_size=1005 00:20:41.582 15:39:11 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:20:41.840 true 00:20:41.840 15:39:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:41.840 15:39:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:42.099 15:39:12 -- target/ns_hotplug_stress.sh@37 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:42.357 15:39:12 -- target/ns_hotplug_stress.sh@40 -- # null_size=1006 00:20:42.357 15:39:12 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:20:42.615 true 00:20:42.615 15:39:12 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:42.615 15:39:12 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:43.547 15:39:13 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:43.805 15:39:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1007 00:20:43.805 15:39:14 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:20:44.063 true 00:20:44.063 15:39:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:44.063 15:39:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:44.321 15:39:14 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:44.579 15:39:14 -- target/ns_hotplug_stress.sh@40 -- # null_size=1008 00:20:44.579 15:39:14 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:20:44.837 true 00:20:44.837 15:39:14 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:44.837 15:39:14 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:45.801 15:39:15 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:45.801 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:20:45.801 15:39:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1009 00:20:45.801 15:39:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:20:46.058 true 00:20:46.058 15:39:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:46.058 15:39:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:46.316 15:39:16 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:46.574 15:39:16 -- target/ns_hotplug_stress.sh@40 -- # null_size=1010 00:20:46.574 15:39:16 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:20:46.832 true 00:20:46.832 15:39:16 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:46.832 15:39:16 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:47.766 15:39:17 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:48.025 15:39:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1011 00:20:48.025 15:39:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:20:48.025 true 00:20:48.283 15:39:18 -- 
target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:48.283 15:39:18 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:48.283 15:39:18 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:48.541 15:39:18 -- target/ns_hotplug_stress.sh@40 -- # null_size=1012 00:20:48.541 15:39:18 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:20:48.799 true 00:20:48.799 15:39:19 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:48.799 15:39:19 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:49.732 15:39:19 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:49.991 15:39:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1013 00:20:49.991 15:39:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:20:50.248 true 00:20:50.248 15:39:20 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:50.248 15:39:20 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:50.506 15:39:20 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:50.764 15:39:20 -- target/ns_hotplug_stress.sh@40 -- # null_size=1014 00:20:50.764 15:39:20 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:20:51.021 true 00:20:51.021 15:39:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:51.021 15:39:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:51.279 15:39:21 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:51.536 15:39:21 -- target/ns_hotplug_stress.sh@40 -- # null_size=1015 00:20:51.537 15:39:21 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:20:51.794 true 00:20:51.794 15:39:21 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:51.794 15:39:21 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:52.729 15:39:22 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:52.987 15:39:23 -- target/ns_hotplug_stress.sh@40 -- # null_size=1016 00:20:52.987 15:39:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:20:53.244 true 00:20:53.244 15:39:23 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:53.244 15:39:23 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:53.502 15:39:23 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:53.759 15:39:23 -- target/ns_hotplug_stress.sh@40 -- # 
null_size=1017 00:20:53.759 15:39:23 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:20:54.018 true 00:20:54.018 15:39:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:54.018 15:39:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:54.276 15:39:24 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:54.534 15:39:24 -- target/ns_hotplug_stress.sh@40 -- # null_size=1018 00:20:54.534 15:39:24 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:20:54.792 true 00:20:54.792 15:39:24 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:54.792 15:39:24 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:55.727 15:39:25 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:55.985 15:39:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1019 00:20:55.985 15:39:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:20:56.243 true 00:20:56.243 15:39:26 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:56.243 15:39:26 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:56.501 15:39:26 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:56.759 15:39:26 -- target/ns_hotplug_stress.sh@40 -- # null_size=1020 00:20:56.759 15:39:26 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:20:57.016 true 00:20:57.016 15:39:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:57.016 15:39:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:57.275 15:39:27 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:57.533 15:39:27 -- target/ns_hotplug_stress.sh@40 -- # null_size=1021 00:20:57.533 15:39:27 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:20:57.790 true 00:20:57.790 15:39:27 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:57.790 15:39:27 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:58.723 15:39:28 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:58.981 15:39:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1022 00:20:58.981 15:39:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:20:59.240 true 00:20:59.240 15:39:29 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:20:59.240 15:39:29 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:20:59.499 15:39:29 -- 
target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:20:59.757 15:39:29 -- target/ns_hotplug_stress.sh@40 -- # null_size=1023 00:20:59.757 15:39:29 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:21:00.014 true 00:21:00.014 15:39:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:00.014 15:39:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:00.272 15:39:30 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:00.530 15:39:30 -- target/ns_hotplug_stress.sh@40 -- # null_size=1024 00:21:00.530 15:39:30 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:21:00.788 true 00:21:00.788 15:39:30 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:00.788 15:39:30 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:01.721 15:39:31 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:01.979 15:39:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1025 00:21:01.979 15:39:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:21:02.252 true 00:21:02.252 15:39:32 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:02.252 15:39:32 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:02.537 15:39:32 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:02.794 15:39:32 -- target/ns_hotplug_stress.sh@40 -- # null_size=1026 00:21:02.794 15:39:32 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:21:02.794 true 00:21:03.052 15:39:33 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:03.052 15:39:33 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:03.985 15:39:33 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:03.985 15:39:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1027 00:21:03.985 15:39:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:21:04.242 true 00:21:04.242 15:39:34 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:04.242 15:39:34 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:04.500 15:39:34 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:04.758 15:39:34 -- target/ns_hotplug_stress.sh@40 -- # null_size=1028 00:21:04.758 15:39:34 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:21:05.016 true 00:21:05.016 15:39:35 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 
00:21:05.016 15:39:35 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:05.983 15:39:35 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:05.983 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:21:05.983 15:39:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1029 00:21:05.983 15:39:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:21:06.252 true 00:21:06.252 15:39:36 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:06.252 15:39:36 -- target/ns_hotplug_stress.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:06.252 Initializing NVMe Controllers 00:21:06.252 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:06.252 Controller IO queue size 128, less than required. 00:21:06.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:06.252 Controller IO queue size 128, less than required. 00:21:06.252 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:06.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:06.252 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:21:06.252 Initialization complete. Launching workers. 00:21:06.252 ======================================================== 00:21:06.252 Latency(us) 00:21:06.252 Device Information : IOPS MiB/s Average min max 00:21:06.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 392.63 0.19 154307.51 3256.24 1030289.65 00:21:06.252 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 9938.39 4.85 12879.07 2738.80 549796.25 00:21:06.252 ======================================================== 00:21:06.252 Total : 10331.03 5.04 18254.11 2738.80 1030289.65 00:21:06.252 00:21:06.510 15:39:36 -- target/ns_hotplug_stress.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:06.772 15:39:36 -- target/ns_hotplug_stress.sh@40 -- # null_size=1030 00:21:06.773 15:39:36 -- target/ns_hotplug_stress.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:21:06.773 true 00:21:07.031 15:39:37 -- target/ns_hotplug_stress.sh@35 -- # kill -0 68697 00:21:07.031 /home/vagrant/spdk_repo/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 35: kill: (68697) - No such process 00:21:07.031 15:39:37 -- target/ns_hotplug_stress.sh@44 -- # wait 68697 00:21:07.031 15:39:37 -- target/ns_hotplug_stress.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:21:07.031 15:39:37 -- target/ns_hotplug_stress.sh@48 -- # nvmftestfini 00:21:07.031 15:39:37 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:07.031 15:39:37 -- nvmf/common.sh@117 -- # sync 00:21:07.031 15:39:37 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:07.031 15:39:37 -- nvmf/common.sh@120 -- # set +e 00:21:07.031 15:39:37 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:07.031 15:39:37 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:07.031 rmmod nvme_tcp 00:21:07.031 rmmod nvme_fabrics 00:21:07.031 rmmod nvme_keyring 00:21:07.031 15:39:37 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
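The nvmftestfini/nvmfcleanup path running here simply undoes that setup. Stripped of the xtrace noise it is roughly the following (the nvmfpid variable and the explicit netns delete are assumptions for illustration; the log itself only shows the module unloads, the kill/wait of the target and the final address flush):

# unload the host-side NVMe fabrics modules (the rmmod output appears above)
modprobe -v -r nvme-tcp
modprobe -v -r nvme-fabrics
# stop the target and wait for the pid recorded by nvmfappstart
kill "$nvmfpid" && wait "$nvmfpid"
# tear down the test network; the netns delete is assumed to be what
# _remove_spdk_ns does internally, only the address flush is visible here
ip netns delete nvmf_tgt_ns_spdk
ip -4 addr flush nvmf_init_if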
00:21:07.031 15:39:37 -- nvmf/common.sh@124 -- # set -e 00:21:07.031 15:39:37 -- nvmf/common.sh@125 -- # return 0 00:21:07.031 15:39:37 -- nvmf/common.sh@478 -- # '[' -n 68560 ']' 00:21:07.031 15:39:37 -- nvmf/common.sh@479 -- # killprocess 68560 00:21:07.031 15:39:37 -- common/autotest_common.sh@936 -- # '[' -z 68560 ']' 00:21:07.031 15:39:37 -- common/autotest_common.sh@940 -- # kill -0 68560 00:21:07.031 15:39:37 -- common/autotest_common.sh@941 -- # uname 00:21:07.031 15:39:37 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:07.031 15:39:37 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 68560 00:21:07.031 killing process with pid 68560 00:21:07.031 15:39:37 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:07.031 15:39:37 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:07.031 15:39:37 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 68560' 00:21:07.031 15:39:37 -- common/autotest_common.sh@955 -- # kill 68560 00:21:07.031 15:39:37 -- common/autotest_common.sh@960 -- # wait 68560 00:21:07.296 15:39:37 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:07.296 15:39:37 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:07.296 15:39:37 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:07.296 15:39:37 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:07.296 15:39:37 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:07.296 15:39:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.296 15:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.296 15:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.296 15:39:37 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:07.296 00:21:07.296 real 0m35.166s 00:21:07.296 user 2m29.369s 00:21:07.296 sys 0m7.983s 00:21:07.296 15:39:37 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:07.296 ************************************ 00:21:07.296 END TEST nvmf_ns_hotplug_stress 00:21:07.296 ************************************ 00:21:07.296 15:39:37 -- common/autotest_common.sh@10 -- # set +x 00:21:07.296 15:39:37 -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:21:07.296 15:39:37 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:07.296 15:39:37 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:07.296 15:39:37 -- common/autotest_common.sh@10 -- # set +x 00:21:07.296 ************************************ 00:21:07.296 START TEST nvmf_connect_stress 00:21:07.296 ************************************ 00:21:07.296 15:39:37 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:21:07.556 * Looking for test storage... 
00:21:07.556 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:07.556 15:39:37 -- target/connect_stress.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:07.556 15:39:37 -- nvmf/common.sh@7 -- # uname -s 00:21:07.556 15:39:37 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:07.556 15:39:37 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:07.556 15:39:37 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:07.556 15:39:37 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:07.556 15:39:37 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:07.556 15:39:37 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:07.556 15:39:37 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:07.556 15:39:37 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:07.556 15:39:37 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:07.556 15:39:37 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:07.556 15:39:37 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:07.556 15:39:37 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:07.556 15:39:37 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:07.556 15:39:37 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:07.556 15:39:37 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:07.556 15:39:37 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:07.556 15:39:37 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:07.556 15:39:37 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:07.556 15:39:37 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:07.556 15:39:37 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:07.556 15:39:37 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.556 15:39:37 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.556 15:39:37 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.556 15:39:37 -- paths/export.sh@5 -- # export PATH 00:21:07.556 15:39:37 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:07.556 15:39:37 -- nvmf/common.sh@47 -- # : 0 00:21:07.556 15:39:37 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:07.556 15:39:37 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:07.556 15:39:37 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:07.556 15:39:37 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:07.556 15:39:37 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:07.556 15:39:37 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:07.556 15:39:37 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:07.556 15:39:37 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:07.556 15:39:37 -- target/connect_stress.sh@12 -- # nvmftestinit 00:21:07.556 15:39:37 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:07.556 15:39:37 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:07.556 15:39:37 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:07.556 15:39:37 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:07.556 15:39:37 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:07.556 15:39:37 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:07.556 15:39:37 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:07.556 15:39:37 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:07.556 15:39:37 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:07.556 15:39:37 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:07.556 15:39:37 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:07.556 15:39:37 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:07.556 15:39:37 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:07.556 15:39:37 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:07.556 15:39:37 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:07.556 15:39:37 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:07.556 15:39:37 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:07.556 15:39:37 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:07.556 15:39:37 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:07.556 15:39:37 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:07.556 15:39:37 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:07.556 15:39:37 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:07.556 15:39:37 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:07.556 15:39:37 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:07.556 15:39:37 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:07.556 15:39:37 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:07.556 15:39:37 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:07.556 15:39:37 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:07.556 Cannot find device "nvmf_tgt_br" 00:21:07.556 15:39:37 -- nvmf/common.sh@155 -- # true 00:21:07.556 15:39:37 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:07.556 Cannot find device "nvmf_tgt_br2" 00:21:07.556 15:39:37 -- nvmf/common.sh@156 -- # true 00:21:07.556 15:39:37 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:07.556 15:39:37 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:07.556 Cannot find device "nvmf_tgt_br" 00:21:07.556 15:39:37 -- nvmf/common.sh@158 -- # true 00:21:07.556 15:39:37 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:07.556 Cannot find device "nvmf_tgt_br2" 00:21:07.556 15:39:37 -- nvmf/common.sh@159 -- # true 00:21:07.556 15:39:37 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:07.556 15:39:37 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:07.556 15:39:37 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:07.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.556 15:39:37 -- nvmf/common.sh@162 -- # true 00:21:07.556 15:39:37 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:07.556 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:07.556 15:39:37 -- nvmf/common.sh@163 -- # true 00:21:07.556 15:39:37 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:07.557 15:39:37 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:07.816 15:39:37 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:07.816 15:39:37 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:07.816 15:39:37 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:07.816 15:39:37 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:07.816 15:39:37 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:07.816 15:39:37 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:07.816 15:39:37 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:07.816 15:39:37 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:07.816 15:39:37 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:07.816 15:39:37 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:07.816 15:39:37 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:07.816 15:39:37 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:07.816 15:39:37 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:07.816 15:39:37 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:07.816 15:39:37 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:07.816 15:39:37 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:07.816 15:39:37 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:07.816 15:39:38 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:07.816 15:39:38 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:07.816 15:39:38 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:07.816 15:39:38 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:07.816 15:39:38 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:07.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:07.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.067 ms 00:21:07.816 00:21:07.816 --- 10.0.0.2 ping statistics --- 00:21:07.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.816 rtt min/avg/max/mdev = 0.067/0.067/0.067/0.000 ms 00:21:07.816 15:39:38 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:07.816 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:07.816 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:07.816 00:21:07.816 --- 10.0.0.3 ping statistics --- 00:21:07.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.816 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:07.816 15:39:38 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:07.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:07.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:21:07.816 00:21:07.816 --- 10.0.0.1 ping statistics --- 00:21:07.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:07.816 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:21:07.816 15:39:38 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:07.816 15:39:38 -- nvmf/common.sh@422 -- # return 0 00:21:07.816 15:39:38 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:07.816 15:39:38 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:07.816 15:39:38 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:07.816 15:39:38 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:07.816 15:39:38 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:07.816 15:39:38 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:07.816 15:39:38 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:07.816 15:39:38 -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:21:07.816 15:39:38 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:07.816 15:39:38 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:07.816 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:21:07.816 15:39:38 -- nvmf/common.sh@470 -- # nvmfpid=69855 00:21:07.816 15:39:38 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:07.816 15:39:38 -- nvmf/common.sh@471 -- # waitforlisten 69855 00:21:07.816 15:39:38 -- common/autotest_common.sh@817 -- # '[' -z 69855 ']' 00:21:07.816 15:39:38 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:07.816 15:39:38 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:07.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:07.816 15:39:38 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
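The nvmf_veth_init sequence above builds a small virtual topology before the target starts: the initiator keeps nvmf_init_if (10.0.0.1) in the root namespace, the target interfaces nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, the veth peer ends are joined by the nvmf_br bridge, and TCP port 4420 is opened in iptables before the connectivity pings. A condensed sketch of the same bring-up for a single target interface; the interface, namespace and address names are the ones used in the trace, while error handling and the second target interface are omitted:

  # One veth pair per side: *_if carries traffic, *_br is the end attached to the bridge.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk

  # Initiator at 10.0.0.1 in the root namespace, target at 10.0.0.2 inside the netns.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if

  # Bring everything up and bridge the peer ends together.
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br

  # Open the NVMe/TCP port and confirm the target address is reachable.
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2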
00:21:07.816 15:39:38 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:07.816 15:39:38 -- common/autotest_common.sh@10 -- # set +x 00:21:08.074 [2024-04-26 15:39:38.119821] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:08.074 [2024-04-26 15:39:38.119931] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:08.074 [2024-04-26 15:39:38.262628] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:08.332 [2024-04-26 15:39:38.395794] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:08.332 [2024-04-26 15:39:38.395888] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:08.332 [2024-04-26 15:39:38.395916] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:08.332 [2024-04-26 15:39:38.395927] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:08.332 [2024-04-26 15:39:38.395936] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:08.332 [2024-04-26 15:39:38.396104] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:08.332 [2024-04-26 15:39:38.396248] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:08.332 [2024-04-26 15:39:38.396257] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:08.898 15:39:39 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:08.898 15:39:39 -- common/autotest_common.sh@850 -- # return 0 00:21:08.898 15:39:39 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:08.898 15:39:39 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:08.898 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.898 15:39:39 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:08.898 15:39:39 -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:08.898 15:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.898 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.898 [2024-04-26 15:39:39.089101] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:08.898 15:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.898 15:39:39 -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:08.898 15:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.898 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.898 15:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.898 15:39:39 -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:08.898 15:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:08.898 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.898 [2024-04-26 15:39:39.106845] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:08.898 15:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.898 15:39:39 -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:08.898 15:39:39 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:21:08.898 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:08.898 NULL1 00:21:08.898 15:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:08.898 15:39:39 -- target/connect_stress.sh@21 -- # PERF_PID=69907 00:21:08.898 15:39:39 -- target/connect_stress.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:21:08.898 15:39:39 -- target/connect_stress.sh@23 -- # rpcs=/home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:21:08.898 15:39:39 -- target/connect_stress.sh@25 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # seq 1 20 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:08.898 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:08.898 15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:09.157 15:39:39 -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:21:09.157 
15:39:39 -- target/connect_stress.sh@28 -- # cat 00:21:09.157 15:39:39 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:09.157 15:39:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.157 15:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.157 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:09.415 15:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.415 15:39:39 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:09.415 15:39:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.415 15:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.415 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:09.672 15:39:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.672 15:39:39 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:09.672 15:39:39 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.672 15:39:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.672 15:39:39 -- common/autotest_common.sh@10 -- # set +x 00:21:09.930 15:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:09.930 15:39:40 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:09.930 15:39:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:09.930 15:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:09.930 15:39:40 -- common/autotest_common.sh@10 -- # set +x 00:21:10.496 15:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.496 15:39:40 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:10.496 15:39:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:10.496 15:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.496 15:39:40 -- common/autotest_common.sh@10 -- # set +x 00:21:10.754 15:39:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:10.754 15:39:40 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:10.754 15:39:40 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:10.754 15:39:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:10.754 15:39:40 -- common/autotest_common.sh@10 -- # set +x 00:21:11.012 15:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.012 15:39:41 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:11.012 15:39:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.012 15:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.012 15:39:41 -- common/autotest_common.sh@10 -- # set +x 00:21:11.269 15:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.269 15:39:41 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:11.269 15:39:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.269 15:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.269 15:39:41 -- common/autotest_common.sh@10 -- # set +x 00:21:11.526 15:39:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:11.526 15:39:41 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:11.526 15:39:41 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:11.526 15:39:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:11.526 15:39:41 -- common/autotest_common.sh@10 -- # set +x 00:21:12.117 15:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.117 15:39:42 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:12.117 15:39:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:12.117 15:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.117 15:39:42 -- common/autotest_common.sh@10 -- # set +x 00:21:12.448 15:39:42 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:21:12.448 15:39:42 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:12.448 15:39:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:12.448 15:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.448 15:39:42 -- common/autotest_common.sh@10 -- # set +x 00:21:12.705 15:39:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.705 15:39:42 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:12.705 15:39:42 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:12.705 15:39:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.705 15:39:42 -- common/autotest_common.sh@10 -- # set +x 00:21:12.963 15:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:12.963 15:39:43 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:12.963 15:39:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:12.963 15:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:12.963 15:39:43 -- common/autotest_common.sh@10 -- # set +x 00:21:13.219 15:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.219 15:39:43 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:13.219 15:39:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:13.219 15:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.219 15:39:43 -- common/autotest_common.sh@10 -- # set +x 00:21:13.478 15:39:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:13.478 15:39:43 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:13.478 15:39:43 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:13.478 15:39:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:13.478 15:39:43 -- common/autotest_common.sh@10 -- # set +x 00:21:14.043 15:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.043 15:39:44 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:14.043 15:39:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.043 15:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.043 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.301 15:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.301 15:39:44 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:14.301 15:39:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.301 15:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.301 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.559 15:39:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.559 15:39:44 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:14.559 15:39:44 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.559 15:39:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.559 15:39:44 -- common/autotest_common.sh@10 -- # set +x 00:21:14.817 15:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:14.817 15:39:45 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:14.817 15:39:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:14.817 15:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:14.817 15:39:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.075 15:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.075 15:39:45 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:15.075 15:39:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:15.075 15:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.075 15:39:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.640 15:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.640 
15:39:45 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:15.640 15:39:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:15.640 15:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.640 15:39:45 -- common/autotest_common.sh@10 -- # set +x 00:21:15.898 15:39:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:15.898 15:39:45 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:15.899 15:39:45 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:15.899 15:39:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:15.899 15:39:45 -- common/autotest_common.sh@10 -- # set +x 00:21:16.157 15:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.157 15:39:46 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:16.157 15:39:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.157 15:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.157 15:39:46 -- common/autotest_common.sh@10 -- # set +x 00:21:16.415 15:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.415 15:39:46 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:16.415 15:39:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.415 15:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.415 15:39:46 -- common/autotest_common.sh@10 -- # set +x 00:21:16.673 15:39:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:16.673 15:39:46 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:16.673 15:39:46 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:16.673 15:39:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:16.673 15:39:46 -- common/autotest_common.sh@10 -- # set +x 00:21:17.264 15:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.264 15:39:47 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:17.264 15:39:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:17.264 15:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.264 15:39:47 -- common/autotest_common.sh@10 -- # set +x 00:21:17.531 15:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.531 15:39:47 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:17.531 15:39:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:17.531 15:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.531 15:39:47 -- common/autotest_common.sh@10 -- # set +x 00:21:17.790 15:39:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:17.790 15:39:47 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:17.790 15:39:47 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:17.790 15:39:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:17.790 15:39:47 -- common/autotest_common.sh@10 -- # set +x 00:21:18.060 15:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.060 15:39:48 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:18.060 15:39:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:18.060 15:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.060 15:39:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.329 15:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.329 15:39:48 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:18.329 15:39:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:18.329 15:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.329 15:39:48 -- common/autotest_common.sh@10 -- # set +x 00:21:18.898 15:39:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:18.898 15:39:48 -- 
target/connect_stress.sh@34 -- # kill -0 69907 00:21:18.898 15:39:48 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:18.898 15:39:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:18.898 15:39:48 -- common/autotest_common.sh@10 -- # set +x 00:21:19.156 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.156 15:39:49 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:19.156 15:39:49 -- target/connect_stress.sh@35 -- # rpc_cmd 00:21:19.156 15:39:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:19.156 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:21:19.156 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:19.416 15:39:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:19.416 15:39:49 -- target/connect_stress.sh@34 -- # kill -0 69907 00:21:19.416 /home/vagrant/spdk_repo/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (69907) - No such process 00:21:19.416 15:39:49 -- target/connect_stress.sh@38 -- # wait 69907 00:21:19.416 15:39:49 -- target/connect_stress.sh@39 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpc.txt 00:21:19.416 15:39:49 -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:21:19.416 15:39:49 -- target/connect_stress.sh@43 -- # nvmftestfini 00:21:19.416 15:39:49 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:19.416 15:39:49 -- nvmf/common.sh@117 -- # sync 00:21:19.416 15:39:49 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:19.416 15:39:49 -- nvmf/common.sh@120 -- # set +e 00:21:19.416 15:39:49 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:19.416 15:39:49 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:19.416 rmmod nvme_tcp 00:21:19.416 rmmod nvme_fabrics 00:21:19.416 rmmod nvme_keyring 00:21:19.416 15:39:49 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:19.416 15:39:49 -- nvmf/common.sh@124 -- # set -e 00:21:19.416 15:39:49 -- nvmf/common.sh@125 -- # return 0 00:21:19.416 15:39:49 -- nvmf/common.sh@478 -- # '[' -n 69855 ']' 00:21:19.416 15:39:49 -- nvmf/common.sh@479 -- # killprocess 69855 00:21:19.416 15:39:49 -- common/autotest_common.sh@936 -- # '[' -z 69855 ']' 00:21:19.416 15:39:49 -- common/autotest_common.sh@940 -- # kill -0 69855 00:21:19.416 15:39:49 -- common/autotest_common.sh@941 -- # uname 00:21:19.416 15:39:49 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:19.416 15:39:49 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 69855 00:21:19.416 15:39:49 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:19.416 15:39:49 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:19.416 15:39:49 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 69855' 00:21:19.416 killing process with pid 69855 00:21:19.416 15:39:49 -- common/autotest_common.sh@955 -- # kill 69855 00:21:19.416 15:39:49 -- common/autotest_common.sh@960 -- # wait 69855 00:21:19.682 15:39:49 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:19.682 15:39:49 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:19.682 15:39:49 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:19.682 15:39:49 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:19.682 15:39:49 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:19.682 15:39:49 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.682 15:39:49 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.682 15:39:49 -- common/autotest_common.sh@22 -- # 
_remove_spdk_ns 00:21:19.682 15:39:49 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:19.682 00:21:19.682 real 0m12.360s 00:21:19.682 user 0m40.908s 00:21:19.682 sys 0m3.296s 00:21:19.682 15:39:49 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:19.682 ************************************ 00:21:19.682 END TEST nvmf_connect_stress 00:21:19.682 ************************************ 00:21:19.682 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:21:19.950 15:39:49 -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:19.950 15:39:49 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:19.950 15:39:49 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:19.950 15:39:49 -- common/autotest_common.sh@10 -- # set +x 00:21:19.950 ************************************ 00:21:19.950 START TEST nvmf_fused_ordering 00:21:19.950 ************************************ 00:21:19.950 15:39:50 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:21:19.950 * Looking for test storage... 00:21:19.950 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:19.950 15:39:50 -- target/fused_ordering.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:19.950 15:39:50 -- nvmf/common.sh@7 -- # uname -s 00:21:19.950 15:39:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.950 15:39:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.950 15:39:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.950 15:39:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.950 15:39:50 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:19.950 15:39:50 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:19.950 15:39:50 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.950 15:39:50 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:19.950 15:39:50 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.950 15:39:50 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:19.950 15:39:50 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:19.950 15:39:50 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:19.950 15:39:50 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.950 15:39:50 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:19.950 15:39:50 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:19.950 15:39:50 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.950 15:39:50 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:19.950 15:39:50 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.950 15:39:50 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.950 15:39:50 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.950 15:39:50 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.950 15:39:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.950 15:39:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.950 15:39:50 -- paths/export.sh@5 -- # export PATH 00:21:19.950 15:39:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.950 15:39:50 -- nvmf/common.sh@47 -- # : 0 00:21:19.950 15:39:50 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:19.950 15:39:50 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:19.950 15:39:50 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:19.950 15:39:50 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.950 15:39:50 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.950 15:39:50 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:19.950 15:39:50 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:19.950 15:39:50 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:19.950 15:39:50 -- target/fused_ordering.sh@12 -- # nvmftestinit 00:21:19.950 15:39:50 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:19.950 15:39:50 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.950 15:39:50 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:19.950 15:39:50 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:19.950 15:39:50 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:19.950 15:39:50 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:19.950 15:39:50 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:19.950 15:39:50 -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:19.951 15:39:50 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:19.951 15:39:50 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:19.951 15:39:50 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:19.951 15:39:50 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:19.951 15:39:50 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:19.951 15:39:50 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:19.951 15:39:50 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:19.951 15:39:50 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:19.951 15:39:50 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:19.951 15:39:50 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:19.951 15:39:50 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:19.951 15:39:50 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:19.951 15:39:50 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:19.951 15:39:50 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:19.951 15:39:50 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:19.951 15:39:50 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:19.951 15:39:50 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:19.951 15:39:50 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:19.951 15:39:50 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:19.951 15:39:50 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:19.951 Cannot find device "nvmf_tgt_br" 00:21:19.951 15:39:50 -- nvmf/common.sh@155 -- # true 00:21:19.951 15:39:50 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:19.951 Cannot find device "nvmf_tgt_br2" 00:21:19.951 15:39:50 -- nvmf/common.sh@156 -- # true 00:21:19.951 15:39:50 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:20.209 15:39:50 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:20.209 Cannot find device "nvmf_tgt_br" 00:21:20.209 15:39:50 -- nvmf/common.sh@158 -- # true 00:21:20.209 15:39:50 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:20.209 Cannot find device "nvmf_tgt_br2" 00:21:20.209 15:39:50 -- nvmf/common.sh@159 -- # true 00:21:20.209 15:39:50 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:20.209 15:39:50 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:20.209 15:39:50 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:20.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.209 15:39:50 -- nvmf/common.sh@162 -- # true 00:21:20.209 15:39:50 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:20.209 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:20.209 15:39:50 -- nvmf/common.sh@163 -- # true 00:21:20.209 15:39:50 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:20.209 15:39:50 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:20.209 15:39:50 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:20.209 15:39:50 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:20.209 15:39:50 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:20.209 15:39:50 -- nvmf/common.sh@175 -- # ip 
link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:20.210 15:39:50 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:20.210 15:39:50 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:20.210 15:39:50 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:20.210 15:39:50 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:20.210 15:39:50 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:20.210 15:39:50 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:20.210 15:39:50 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:20.210 15:39:50 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:20.210 15:39:50 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:20.210 15:39:50 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:20.210 15:39:50 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:20.210 15:39:50 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:20.210 15:39:50 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:20.210 15:39:50 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:20.210 15:39:50 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:20.471 15:39:50 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:20.471 15:39:50 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:20.472 15:39:50 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:20.472 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:20.472 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.113 ms 00:21:20.472 00:21:20.472 --- 10.0.0.2 ping statistics --- 00:21:20.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.472 rtt min/avg/max/mdev = 0.113/0.113/0.113/0.000 ms 00:21:20.472 15:39:50 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:20.472 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:20.472 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:20.472 00:21:20.472 --- 10.0.0.3 ping statistics --- 00:21:20.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.472 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:20.472 15:39:50 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:20.472 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:20.472 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:21:20.472 00:21:20.472 --- 10.0.0.1 ping statistics --- 00:21:20.472 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:20.472 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:21:20.472 15:39:50 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:20.472 15:39:50 -- nvmf/common.sh@422 -- # return 0 00:21:20.472 15:39:50 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:20.472 15:39:50 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:20.472 15:39:50 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:20.472 15:39:50 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:20.472 15:39:50 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:20.472 15:39:50 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:20.472 15:39:50 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:20.472 15:39:50 -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:21:20.472 15:39:50 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:20.472 15:39:50 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:20.472 15:39:50 -- common/autotest_common.sh@10 -- # set +x 00:21:20.472 15:39:50 -- nvmf/common.sh@470 -- # nvmfpid=70233 00:21:20.472 15:39:50 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:20.472 15:39:50 -- nvmf/common.sh@471 -- # waitforlisten 70233 00:21:20.472 15:39:50 -- common/autotest_common.sh@817 -- # '[' -z 70233 ']' 00:21:20.472 15:39:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:20.472 15:39:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:20.472 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:20.472 15:39:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:20.472 15:39:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:20.472 15:39:50 -- common/autotest_common.sh@10 -- # set +x 00:21:20.472 [2024-04-26 15:39:50.620827] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:20.472 [2024-04-26 15:39:50.621056] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:20.472 [2024-04-26 15:39:50.760023] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:20.731 [2024-04-26 15:39:50.890618] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:20.731 [2024-04-26 15:39:50.890680] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:20.731 [2024-04-26 15:39:50.890695] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:20.731 [2024-04-26 15:39:50.890705] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:20.731 [2024-04-26 15:39:50.890715] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
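As in the earlier tests, nvmf_tgt is launched under ip netns exec nvmf_tgt_ns_spdk and the harness then blocks until the application answers on the UNIX RPC socket ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..."). A minimal stand-in for that wait step; this is not the actual waitforlisten helper from autotest_common.sh, only an illustrative poll that uses the spdk_get_version RPC:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk.sock
  # Poll the RPC socket until the target responds or roughly 10 seconds pass.
  for _ in $(seq 1 100); do
      if "$rpc" -s "$sock" spdk_get_version >/dev/null 2>&1; then
          echo "nvmf_tgt is up and listening on $sock"
          break
      fi
      sleep 0.1
  done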
00:21:20.731 [2024-04-26 15:39:50.890756] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:21.665 15:39:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:21.665 15:39:51 -- common/autotest_common.sh@850 -- # return 0 00:21:21.665 15:39:51 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:21.665 15:39:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 15:39:51 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:21.665 15:39:51 -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:21.665 15:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 [2024-04-26 15:39:51.724043] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:21.665 15:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.665 15:39:51 -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:21.665 15:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 15:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.665 15:39:51 -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:21.665 15:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 [2024-04-26 15:39:51.740159] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:21.665 15:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.665 15:39:51 -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:21.665 15:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 NULL1 00:21:21.665 15:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.665 15:39:51 -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:21:21.665 15:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 15:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.665 15:39:51 -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:21:21.665 15:39:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:21.665 15:39:51 -- common/autotest_common.sh@10 -- # set +x 00:21:21.665 15:39:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:21.665 15:39:51 -- target/fused_ordering.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:21.665 [2024-04-26 15:39:51.791724] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:21:21.665 [2024-04-26 15:39:51.791770] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70288 ] 00:21:22.230 Attached to nqn.2016-06.io.spdk:cnode1 00:21:22.230 Namespace ID: 1 size: 1GB 00:21:22.230 fused_ordering(0) 00:21:22.230 fused_ordering(1) 00:21:22.230 fused_ordering(2) 00:21:22.230 fused_ordering(3) 00:21:22.230 fused_ordering(4) 00:21:22.230 fused_ordering(5) 00:21:22.230 fused_ordering(6) 00:21:22.230 fused_ordering(7) 00:21:22.230 fused_ordering(8) 00:21:22.230 fused_ordering(9) 00:21:22.230 fused_ordering(10) 00:21:22.230 fused_ordering(11) 00:21:22.230 fused_ordering(12) 00:21:22.230 fused_ordering(13) 00:21:22.230 fused_ordering(14) 00:21:22.230 fused_ordering(15) 00:21:22.230 fused_ordering(16) 00:21:22.230 fused_ordering(17) 00:21:22.230 fused_ordering(18) 00:21:22.230 fused_ordering(19) 00:21:22.230 fused_ordering(20) 00:21:22.230 fused_ordering(21) 00:21:22.230 fused_ordering(22) 00:21:22.230 fused_ordering(23) 00:21:22.230 fused_ordering(24) 00:21:22.230 fused_ordering(25) 00:21:22.230 fused_ordering(26) 00:21:22.230 fused_ordering(27) 00:21:22.230 fused_ordering(28) 00:21:22.230 fused_ordering(29) 00:21:22.230 fused_ordering(30) 00:21:22.230 fused_ordering(31) 00:21:22.230 fused_ordering(32) 00:21:22.230 fused_ordering(33) 00:21:22.230 fused_ordering(34) 00:21:22.230 fused_ordering(35) 00:21:22.230 fused_ordering(36) 00:21:22.230 fused_ordering(37) 00:21:22.230 fused_ordering(38) 00:21:22.230 fused_ordering(39) 00:21:22.230 fused_ordering(40) 00:21:22.230 fused_ordering(41) 00:21:22.230 fused_ordering(42) 00:21:22.230 fused_ordering(43) 00:21:22.230 fused_ordering(44) 00:21:22.230 fused_ordering(45) 00:21:22.230 fused_ordering(46) 00:21:22.230 fused_ordering(47) 00:21:22.230 fused_ordering(48) 00:21:22.230 fused_ordering(49) 00:21:22.230 fused_ordering(50) 00:21:22.230 fused_ordering(51) 00:21:22.230 fused_ordering(52) 00:21:22.230 fused_ordering(53) 00:21:22.230 fused_ordering(54) 00:21:22.230 fused_ordering(55) 00:21:22.230 fused_ordering(56) 00:21:22.230 fused_ordering(57) 00:21:22.230 fused_ordering(58) 00:21:22.230 fused_ordering(59) 00:21:22.230 fused_ordering(60) 00:21:22.230 fused_ordering(61) 00:21:22.230 fused_ordering(62) 00:21:22.230 fused_ordering(63) 00:21:22.230 fused_ordering(64) 00:21:22.230 fused_ordering(65) 00:21:22.230 fused_ordering(66) 00:21:22.231 fused_ordering(67) 00:21:22.231 fused_ordering(68) 00:21:22.231 fused_ordering(69) 00:21:22.231 fused_ordering(70) 00:21:22.231 fused_ordering(71) 00:21:22.231 fused_ordering(72) 00:21:22.231 fused_ordering(73) 00:21:22.231 fused_ordering(74) 00:21:22.231 fused_ordering(75) 00:21:22.231 fused_ordering(76) 00:21:22.231 fused_ordering(77) 00:21:22.231 fused_ordering(78) 00:21:22.231 fused_ordering(79) 00:21:22.231 fused_ordering(80) 00:21:22.231 fused_ordering(81) 00:21:22.231 fused_ordering(82) 00:21:22.231 fused_ordering(83) 00:21:22.231 fused_ordering(84) 00:21:22.231 fused_ordering(85) 00:21:22.231 fused_ordering(86) 00:21:22.231 fused_ordering(87) 00:21:22.231 fused_ordering(88) 00:21:22.231 fused_ordering(89) 00:21:22.231 fused_ordering(90) 00:21:22.231 fused_ordering(91) 00:21:22.231 fused_ordering(92) 00:21:22.231 fused_ordering(93) 00:21:22.231 fused_ordering(94) 00:21:22.231 fused_ordering(95) 00:21:22.231 fused_ordering(96) 00:21:22.231 fused_ordering(97) 00:21:22.231 fused_ordering(98) 
00:21:22.231 fused_ordering(99) 00:21:22.231 fused_ordering(100) 00:21:22.231 fused_ordering(101) 00:21:22.231 fused_ordering(102) 00:21:22.231 fused_ordering(103) 00:21:22.231 fused_ordering(104) 00:21:22.231 fused_ordering(105) 00:21:22.231 fused_ordering(106) 00:21:22.231 fused_ordering(107) 00:21:22.231 fused_ordering(108) 00:21:22.231 fused_ordering(109) 00:21:22.231 fused_ordering(110) 00:21:22.231 fused_ordering(111) 00:21:22.231 fused_ordering(112) 00:21:22.231 fused_ordering(113) 00:21:22.231 fused_ordering(114) 00:21:22.231 fused_ordering(115) 00:21:22.231 fused_ordering(116) 00:21:22.231 fused_ordering(117) 00:21:22.231 fused_ordering(118) 00:21:22.231 fused_ordering(119) 00:21:22.231 fused_ordering(120) 00:21:22.231 fused_ordering(121) 00:21:22.231 fused_ordering(122) 00:21:22.231 fused_ordering(123) 00:21:22.231 fused_ordering(124) 00:21:22.231 fused_ordering(125) 00:21:22.231 fused_ordering(126) 00:21:22.231 fused_ordering(127) 00:21:22.231 fused_ordering(128) 00:21:22.231 fused_ordering(129) 00:21:22.231 fused_ordering(130) 00:21:22.231 fused_ordering(131) 00:21:22.231 fused_ordering(132) 00:21:22.231 fused_ordering(133) 00:21:22.231 fused_ordering(134) 00:21:22.231 fused_ordering(135) 00:21:22.231 fused_ordering(136) 00:21:22.231 fused_ordering(137) 00:21:22.231 fused_ordering(138) 00:21:22.231 fused_ordering(139) 00:21:22.231 fused_ordering(140) 00:21:22.231 fused_ordering(141) 00:21:22.231 fused_ordering(142) 00:21:22.231 fused_ordering(143) 00:21:22.231 fused_ordering(144) 00:21:22.231 fused_ordering(145) 00:21:22.231 fused_ordering(146) 00:21:22.231 fused_ordering(147) 00:21:22.231 fused_ordering(148) 00:21:22.231 fused_ordering(149) 00:21:22.231 fused_ordering(150) 00:21:22.231 fused_ordering(151) 00:21:22.231 fused_ordering(152) 00:21:22.231 fused_ordering(153) 00:21:22.231 fused_ordering(154) 00:21:22.231 fused_ordering(155) 00:21:22.231 fused_ordering(156) 00:21:22.231 fused_ordering(157) 00:21:22.231 fused_ordering(158) 00:21:22.231 fused_ordering(159) 00:21:22.231 fused_ordering(160) 00:21:22.231 fused_ordering(161) 00:21:22.231 fused_ordering(162) 00:21:22.231 fused_ordering(163) 00:21:22.231 fused_ordering(164) 00:21:22.231 fused_ordering(165) 00:21:22.231 fused_ordering(166) 00:21:22.231 fused_ordering(167) 00:21:22.231 fused_ordering(168) 00:21:22.231 fused_ordering(169) 00:21:22.231 fused_ordering(170) 00:21:22.231 fused_ordering(171) 00:21:22.231 fused_ordering(172) 00:21:22.231 fused_ordering(173) 00:21:22.231 fused_ordering(174) 00:21:22.231 fused_ordering(175) 00:21:22.231 fused_ordering(176) 00:21:22.231 fused_ordering(177) 00:21:22.231 fused_ordering(178) 00:21:22.231 fused_ordering(179) 00:21:22.231 fused_ordering(180) 00:21:22.231 fused_ordering(181) 00:21:22.231 fused_ordering(182) 00:21:22.231 fused_ordering(183) 00:21:22.231 fused_ordering(184) 00:21:22.231 fused_ordering(185) 00:21:22.231 fused_ordering(186) 00:21:22.231 fused_ordering(187) 00:21:22.231 fused_ordering(188) 00:21:22.231 fused_ordering(189) 00:21:22.231 fused_ordering(190) 00:21:22.231 fused_ordering(191) 00:21:22.231 fused_ordering(192) 00:21:22.231 fused_ordering(193) 00:21:22.231 fused_ordering(194) 00:21:22.231 fused_ordering(195) 00:21:22.231 fused_ordering(196) 00:21:22.231 fused_ordering(197) 00:21:22.231 fused_ordering(198) 00:21:22.231 fused_ordering(199) 00:21:22.231 fused_ordering(200) 00:21:22.231 fused_ordering(201) 00:21:22.231 fused_ordering(202) 00:21:22.231 fused_ordering(203) 00:21:22.231 fused_ordering(204) 00:21:22.231 fused_ordering(205) 00:21:22.231 
fused_ordering(206) 00:21:22.231 fused_ordering(207) 00:21:22.231 fused_ordering(208) 00:21:22.231 fused_ordering(209) 00:21:22.231 fused_ordering(210) 00:21:22.231 fused_ordering(211) 00:21:22.231 fused_ordering(212) 00:21:22.231 fused_ordering(213) 00:21:22.231 fused_ordering(214) 00:21:22.231 fused_ordering(215) 00:21:22.231 fused_ordering(216) 00:21:22.231 fused_ordering(217) 00:21:22.231 fused_ordering(218) 00:21:22.231 fused_ordering(219) 00:21:22.231 fused_ordering(220) 00:21:22.231 fused_ordering(221) 00:21:22.231 fused_ordering(222) 00:21:22.231 fused_ordering(223) 00:21:22.231 fused_ordering(224) 00:21:22.231 fused_ordering(225) 00:21:22.231 fused_ordering(226) 00:21:22.231 fused_ordering(227) 00:21:22.231 fused_ordering(228) 00:21:22.231 fused_ordering(229) 00:21:22.231 fused_ordering(230) 00:21:22.231 fused_ordering(231) 00:21:22.231 fused_ordering(232) 00:21:22.231 fused_ordering(233) 00:21:22.231 fused_ordering(234) 00:21:22.231 fused_ordering(235) 00:21:22.231 fused_ordering(236) 00:21:22.231 fused_ordering(237) 00:21:22.231 fused_ordering(238) 00:21:22.231 fused_ordering(239) 00:21:22.231 fused_ordering(240) 00:21:22.231 fused_ordering(241) 00:21:22.231 fused_ordering(242) 00:21:22.231 fused_ordering(243) 00:21:22.231 fused_ordering(244) 00:21:22.231 fused_ordering(245) 00:21:22.231 fused_ordering(246) 00:21:22.231 fused_ordering(247) 00:21:22.231 fused_ordering(248) 00:21:22.231 fused_ordering(249) 00:21:22.231 fused_ordering(250) 00:21:22.231 fused_ordering(251) 00:21:22.231 fused_ordering(252) 00:21:22.231 fused_ordering(253) 00:21:22.231 fused_ordering(254) 00:21:22.231 fused_ordering(255) 00:21:22.231 fused_ordering(256) 00:21:22.231 fused_ordering(257) 00:21:22.231 fused_ordering(258) 00:21:22.231 fused_ordering(259) 00:21:22.231 fused_ordering(260) 00:21:22.231 fused_ordering(261) 00:21:22.231 fused_ordering(262) 00:21:22.231 fused_ordering(263) 00:21:22.231 fused_ordering(264) 00:21:22.231 fused_ordering(265) 00:21:22.231 fused_ordering(266) 00:21:22.231 fused_ordering(267) 00:21:22.231 fused_ordering(268) 00:21:22.231 fused_ordering(269) 00:21:22.231 fused_ordering(270) 00:21:22.231 fused_ordering(271) 00:21:22.231 fused_ordering(272) 00:21:22.231 fused_ordering(273) 00:21:22.231 fused_ordering(274) 00:21:22.231 fused_ordering(275) 00:21:22.231 fused_ordering(276) 00:21:22.231 fused_ordering(277) 00:21:22.231 fused_ordering(278) 00:21:22.231 fused_ordering(279) 00:21:22.231 fused_ordering(280) 00:21:22.231 fused_ordering(281) 00:21:22.231 fused_ordering(282) 00:21:22.231 fused_ordering(283) 00:21:22.231 fused_ordering(284) 00:21:22.231 fused_ordering(285) 00:21:22.231 fused_ordering(286) 00:21:22.231 fused_ordering(287) 00:21:22.231 fused_ordering(288) 00:21:22.231 fused_ordering(289) 00:21:22.231 fused_ordering(290) 00:21:22.231 fused_ordering(291) 00:21:22.231 fused_ordering(292) 00:21:22.231 fused_ordering(293) 00:21:22.231 fused_ordering(294) 00:21:22.231 fused_ordering(295) 00:21:22.231 fused_ordering(296) 00:21:22.231 fused_ordering(297) 00:21:22.231 fused_ordering(298) 00:21:22.231 fused_ordering(299) 00:21:22.231 fused_ordering(300) 00:21:22.231 fused_ordering(301) 00:21:22.231 fused_ordering(302) 00:21:22.231 fused_ordering(303) 00:21:22.231 fused_ordering(304) 00:21:22.231 fused_ordering(305) 00:21:22.231 fused_ordering(306) 00:21:22.231 fused_ordering(307) 00:21:22.231 fused_ordering(308) 00:21:22.231 fused_ordering(309) 00:21:22.231 fused_ordering(310) 00:21:22.231 fused_ordering(311) 00:21:22.231 fused_ordering(312) 00:21:22.231 fused_ordering(313) 
00:21:22.231 fused_ordering(314) 00:21:22.231 fused_ordering(315) 00:21:22.231 fused_ordering(316) 00:21:22.231 fused_ordering(317) 00:21:22.231 fused_ordering(318) 00:21:22.231 fused_ordering(319) 00:21:22.231 fused_ordering(320) 00:21:22.231 fused_ordering(321) 00:21:22.231 fused_ordering(322) 00:21:22.231 fused_ordering(323) 00:21:22.231 fused_ordering(324) 00:21:22.231 fused_ordering(325) 00:21:22.231 fused_ordering(326) 00:21:22.231 fused_ordering(327) 00:21:22.231 fused_ordering(328) 00:21:22.231 fused_ordering(329) 00:21:22.231 fused_ordering(330) 00:21:22.231 fused_ordering(331) 00:21:22.231 fused_ordering(332) 00:21:22.231 fused_ordering(333) 00:21:22.231 fused_ordering(334) 00:21:22.231 fused_ordering(335) 00:21:22.231 fused_ordering(336) 00:21:22.231 fused_ordering(337) 00:21:22.231 fused_ordering(338) 00:21:22.231 fused_ordering(339) 00:21:22.231 fused_ordering(340) 00:21:22.231 fused_ordering(341) 00:21:22.231 fused_ordering(342) 00:21:22.231 fused_ordering(343) 00:21:22.232 fused_ordering(344) 00:21:22.232 fused_ordering(345) 00:21:22.232 fused_ordering(346) 00:21:22.232 fused_ordering(347) 00:21:22.232 fused_ordering(348) 00:21:22.232 fused_ordering(349) 00:21:22.232 fused_ordering(350) 00:21:22.232 fused_ordering(351) 00:21:22.232 fused_ordering(352) 00:21:22.232 fused_ordering(353) 00:21:22.232 fused_ordering(354) 00:21:22.232 fused_ordering(355) 00:21:22.232 fused_ordering(356) 00:21:22.232 fused_ordering(357) 00:21:22.232 fused_ordering(358) 00:21:22.232 fused_ordering(359) 00:21:22.232 fused_ordering(360) 00:21:22.232 fused_ordering(361) 00:21:22.232 fused_ordering(362) 00:21:22.232 fused_ordering(363) 00:21:22.232 fused_ordering(364) 00:21:22.232 fused_ordering(365) 00:21:22.232 fused_ordering(366) 00:21:22.232 fused_ordering(367) 00:21:22.232 fused_ordering(368) 00:21:22.232 fused_ordering(369) 00:21:22.232 fused_ordering(370) 00:21:22.232 fused_ordering(371) 00:21:22.232 fused_ordering(372) 00:21:22.232 fused_ordering(373) 00:21:22.232 fused_ordering(374) 00:21:22.232 fused_ordering(375) 00:21:22.232 fused_ordering(376) 00:21:22.232 fused_ordering(377) 00:21:22.232 fused_ordering(378) 00:21:22.232 fused_ordering(379) 00:21:22.232 fused_ordering(380) 00:21:22.232 fused_ordering(381) 00:21:22.232 fused_ordering(382) 00:21:22.232 fused_ordering(383) 00:21:22.232 fused_ordering(384) 00:21:22.232 fused_ordering(385) 00:21:22.232 fused_ordering(386) 00:21:22.232 fused_ordering(387) 00:21:22.232 fused_ordering(388) 00:21:22.232 fused_ordering(389) 00:21:22.232 fused_ordering(390) 00:21:22.232 fused_ordering(391) 00:21:22.232 fused_ordering(392) 00:21:22.232 fused_ordering(393) 00:21:22.232 fused_ordering(394) 00:21:22.232 fused_ordering(395) 00:21:22.232 fused_ordering(396) 00:21:22.232 fused_ordering(397) 00:21:22.232 fused_ordering(398) 00:21:22.232 fused_ordering(399) 00:21:22.232 fused_ordering(400) 00:21:22.232 fused_ordering(401) 00:21:22.232 fused_ordering(402) 00:21:22.232 fused_ordering(403) 00:21:22.232 fused_ordering(404) 00:21:22.232 fused_ordering(405) 00:21:22.232 fused_ordering(406) 00:21:22.232 fused_ordering(407) 00:21:22.232 fused_ordering(408) 00:21:22.232 fused_ordering(409) 00:21:22.232 fused_ordering(410) 00:21:22.798 fused_ordering(411) 00:21:22.798 fused_ordering(412) 00:21:22.798 fused_ordering(413) 00:21:22.798 fused_ordering(414) 00:21:22.798 fused_ordering(415) 00:21:22.798 fused_ordering(416) 00:21:22.798 fused_ordering(417) 00:21:22.798 fused_ordering(418) 00:21:22.798 fused_ordering(419) 00:21:22.798 fused_ordering(420) 00:21:22.798 
fused_ordering(421) 00:21:22.798 fused_ordering(422) 00:21:22.798 fused_ordering(423) 00:21:22.798 fused_ordering(424) 00:21:22.798 fused_ordering(425) 00:21:22.798 fused_ordering(426) 00:21:22.798 fused_ordering(427) 00:21:22.798 fused_ordering(428) 00:21:22.798 fused_ordering(429) 00:21:22.798 fused_ordering(430) 00:21:22.798 fused_ordering(431) 00:21:22.798 fused_ordering(432) 00:21:22.798 fused_ordering(433) 00:21:22.798 fused_ordering(434) 00:21:22.798 fused_ordering(435) 00:21:22.798 fused_ordering(436) 00:21:22.798 fused_ordering(437) 00:21:22.798 fused_ordering(438) 00:21:22.798 fused_ordering(439) 00:21:22.798 fused_ordering(440) 00:21:22.798 fused_ordering(441) 00:21:22.798 fused_ordering(442) 00:21:22.798 fused_ordering(443) 00:21:22.798 fused_ordering(444) 00:21:22.798 fused_ordering(445) 00:21:22.798 fused_ordering(446) 00:21:22.798 fused_ordering(447) 00:21:22.798 fused_ordering(448) 00:21:22.798 fused_ordering(449) 00:21:22.798 fused_ordering(450) 00:21:22.798 fused_ordering(451) 00:21:22.798 fused_ordering(452) 00:21:22.798 fused_ordering(453) 00:21:22.798 fused_ordering(454) 00:21:22.798 fused_ordering(455) 00:21:22.798 fused_ordering(456) 00:21:22.798 fused_ordering(457) 00:21:22.798 fused_ordering(458) 00:21:22.798 fused_ordering(459) 00:21:22.798 fused_ordering(460) 00:21:22.798 fused_ordering(461) 00:21:22.798 fused_ordering(462) 00:21:22.798 fused_ordering(463) 00:21:22.798 fused_ordering(464) 00:21:22.798 fused_ordering(465) 00:21:22.798 fused_ordering(466) 00:21:22.799 fused_ordering(467) 00:21:22.799 fused_ordering(468) 00:21:22.799 fused_ordering(469) 00:21:22.799 fused_ordering(470) 00:21:22.799 fused_ordering(471) 00:21:22.799 fused_ordering(472) 00:21:22.799 fused_ordering(473) 00:21:22.799 fused_ordering(474) 00:21:22.799 fused_ordering(475) 00:21:22.799 fused_ordering(476) 00:21:22.799 fused_ordering(477) 00:21:22.799 fused_ordering(478) 00:21:22.799 fused_ordering(479) 00:21:22.799 fused_ordering(480) 00:21:22.799 fused_ordering(481) 00:21:22.799 fused_ordering(482) 00:21:22.799 fused_ordering(483) 00:21:22.799 fused_ordering(484) 00:21:22.799 fused_ordering(485) 00:21:22.799 fused_ordering(486) 00:21:22.799 fused_ordering(487) 00:21:22.799 fused_ordering(488) 00:21:22.799 fused_ordering(489) 00:21:22.799 fused_ordering(490) 00:21:22.799 fused_ordering(491) 00:21:22.799 fused_ordering(492) 00:21:22.799 fused_ordering(493) 00:21:22.799 fused_ordering(494) 00:21:22.799 fused_ordering(495) 00:21:22.799 fused_ordering(496) 00:21:22.799 fused_ordering(497) 00:21:22.799 fused_ordering(498) 00:21:22.799 fused_ordering(499) 00:21:22.799 fused_ordering(500) 00:21:22.799 fused_ordering(501) 00:21:22.799 fused_ordering(502) 00:21:22.799 fused_ordering(503) 00:21:22.799 fused_ordering(504) 00:21:22.799 fused_ordering(505) 00:21:22.799 fused_ordering(506) 00:21:22.799 fused_ordering(507) 00:21:22.799 fused_ordering(508) 00:21:22.799 fused_ordering(509) 00:21:22.799 fused_ordering(510) 00:21:22.799 fused_ordering(511) 00:21:22.799 fused_ordering(512) 00:21:22.799 fused_ordering(513) 00:21:22.799 fused_ordering(514) 00:21:22.799 fused_ordering(515) 00:21:22.799 fused_ordering(516) 00:21:22.799 fused_ordering(517) 00:21:22.799 fused_ordering(518) 00:21:22.799 fused_ordering(519) 00:21:22.799 fused_ordering(520) 00:21:22.799 fused_ordering(521) 00:21:22.799 fused_ordering(522) 00:21:22.799 fused_ordering(523) 00:21:22.799 fused_ordering(524) 00:21:22.799 fused_ordering(525) 00:21:22.799 fused_ordering(526) 00:21:22.799 fused_ordering(527) 00:21:22.799 fused_ordering(528) 
00:21:22.799 fused_ordering(529) 00:21:22.799 fused_ordering(530) 00:21:22.799 fused_ordering(531) 00:21:22.799 fused_ordering(532) 00:21:22.799 fused_ordering(533) 00:21:22.799 fused_ordering(534) 00:21:22.799 fused_ordering(535) 00:21:22.799 fused_ordering(536) 00:21:22.799 fused_ordering(537) 00:21:22.799 fused_ordering(538) 00:21:22.799 fused_ordering(539) 00:21:22.799 fused_ordering(540) 00:21:22.799 fused_ordering(541) 00:21:22.799 fused_ordering(542) 00:21:22.799 fused_ordering(543) 00:21:22.799 fused_ordering(544) 00:21:22.799 fused_ordering(545) 00:21:22.799 fused_ordering(546) 00:21:22.799 fused_ordering(547) 00:21:22.799 fused_ordering(548) 00:21:22.799 fused_ordering(549) 00:21:22.799 fused_ordering(550) 00:21:22.799 fused_ordering(551) 00:21:22.799 fused_ordering(552) 00:21:22.799 fused_ordering(553) 00:21:22.799 fused_ordering(554) 00:21:22.799 fused_ordering(555) 00:21:22.799 fused_ordering(556) 00:21:22.799 fused_ordering(557) 00:21:22.799 fused_ordering(558) 00:21:22.799 fused_ordering(559) 00:21:22.799 fused_ordering(560) 00:21:22.799 fused_ordering(561) 00:21:22.799 fused_ordering(562) 00:21:22.799 fused_ordering(563) 00:21:22.799 fused_ordering(564) 00:21:22.799 fused_ordering(565) 00:21:22.799 fused_ordering(566) 00:21:22.799 fused_ordering(567) 00:21:22.799 fused_ordering(568) 00:21:22.799 fused_ordering(569) 00:21:22.799 fused_ordering(570) 00:21:22.799 fused_ordering(571) 00:21:22.799 fused_ordering(572) 00:21:22.799 fused_ordering(573) 00:21:22.799 fused_ordering(574) 00:21:22.799 fused_ordering(575) 00:21:22.799 fused_ordering(576) 00:21:22.799 fused_ordering(577) 00:21:22.799 fused_ordering(578) 00:21:22.799 fused_ordering(579) 00:21:22.799 fused_ordering(580) 00:21:22.799 fused_ordering(581) 00:21:22.799 fused_ordering(582) 00:21:22.799 fused_ordering(583) 00:21:22.799 fused_ordering(584) 00:21:22.799 fused_ordering(585) 00:21:22.799 fused_ordering(586) 00:21:22.799 fused_ordering(587) 00:21:22.799 fused_ordering(588) 00:21:22.799 fused_ordering(589) 00:21:22.799 fused_ordering(590) 00:21:22.799 fused_ordering(591) 00:21:22.799 fused_ordering(592) 00:21:22.799 fused_ordering(593) 00:21:22.799 fused_ordering(594) 00:21:22.799 fused_ordering(595) 00:21:22.799 fused_ordering(596) 00:21:22.799 fused_ordering(597) 00:21:22.799 fused_ordering(598) 00:21:22.799 fused_ordering(599) 00:21:22.799 fused_ordering(600) 00:21:22.799 fused_ordering(601) 00:21:22.799 fused_ordering(602) 00:21:22.799 fused_ordering(603) 00:21:22.799 fused_ordering(604) 00:21:22.799 fused_ordering(605) 00:21:22.799 fused_ordering(606) 00:21:22.799 fused_ordering(607) 00:21:22.799 fused_ordering(608) 00:21:22.799 fused_ordering(609) 00:21:22.799 fused_ordering(610) 00:21:22.799 fused_ordering(611) 00:21:22.799 fused_ordering(612) 00:21:22.799 fused_ordering(613) 00:21:22.799 fused_ordering(614) 00:21:22.799 fused_ordering(615) 00:21:23.076 fused_ordering(616) 00:21:23.076 fused_ordering(617) 00:21:23.076 fused_ordering(618) 00:21:23.076 fused_ordering(619) 00:21:23.076 fused_ordering(620) 00:21:23.076 fused_ordering(621) 00:21:23.076 fused_ordering(622) 00:21:23.076 fused_ordering(623) 00:21:23.076 fused_ordering(624) 00:21:23.076 fused_ordering(625) 00:21:23.076 fused_ordering(626) 00:21:23.076 fused_ordering(627) 00:21:23.076 fused_ordering(628) 00:21:23.076 fused_ordering(629) 00:21:23.076 fused_ordering(630) 00:21:23.076 fused_ordering(631) 00:21:23.076 fused_ordering(632) 00:21:23.076 fused_ordering(633) 00:21:23.076 fused_ordering(634) 00:21:23.076 fused_ordering(635) 00:21:23.076 
fused_ordering(636) 00:21:23.076 fused_ordering(637) 00:21:23.076 fused_ordering(638) 00:21:23.076 fused_ordering(639) 00:21:23.076 fused_ordering(640) 00:21:23.076 fused_ordering(641) 00:21:23.076 fused_ordering(642) 00:21:23.076 fused_ordering(643) 00:21:23.076 fused_ordering(644) 00:21:23.076 fused_ordering(645) 00:21:23.076 fused_ordering(646) 00:21:23.076 fused_ordering(647) 00:21:23.076 fused_ordering(648) 00:21:23.076 fused_ordering(649) 00:21:23.076 fused_ordering(650) 00:21:23.076 fused_ordering(651) 00:21:23.076 fused_ordering(652) 00:21:23.076 fused_ordering(653) 00:21:23.076 fused_ordering(654) 00:21:23.076 fused_ordering(655) 00:21:23.076 fused_ordering(656) 00:21:23.076 fused_ordering(657) 00:21:23.076 fused_ordering(658) 00:21:23.076 fused_ordering(659) 00:21:23.076 fused_ordering(660) 00:21:23.076 fused_ordering(661) 00:21:23.076 fused_ordering(662) 00:21:23.076 fused_ordering(663) 00:21:23.076 fused_ordering(664) 00:21:23.076 fused_ordering(665) 00:21:23.076 fused_ordering(666) 00:21:23.076 fused_ordering(667) 00:21:23.076 fused_ordering(668) 00:21:23.076 fused_ordering(669) 00:21:23.076 fused_ordering(670) 00:21:23.076 fused_ordering(671) 00:21:23.076 fused_ordering(672) 00:21:23.076 fused_ordering(673) 00:21:23.076 fused_ordering(674) 00:21:23.076 fused_ordering(675) 00:21:23.076 fused_ordering(676) 00:21:23.076 fused_ordering(677) 00:21:23.076 fused_ordering(678) 00:21:23.076 fused_ordering(679) 00:21:23.076 fused_ordering(680) 00:21:23.076 fused_ordering(681) 00:21:23.076 fused_ordering(682) 00:21:23.076 fused_ordering(683) 00:21:23.076 fused_ordering(684) 00:21:23.076 fused_ordering(685) 00:21:23.076 fused_ordering(686) 00:21:23.076 fused_ordering(687) 00:21:23.076 fused_ordering(688) 00:21:23.076 fused_ordering(689) 00:21:23.076 fused_ordering(690) 00:21:23.076 fused_ordering(691) 00:21:23.076 fused_ordering(692) 00:21:23.076 fused_ordering(693) 00:21:23.076 fused_ordering(694) 00:21:23.076 fused_ordering(695) 00:21:23.076 fused_ordering(696) 00:21:23.076 fused_ordering(697) 00:21:23.076 fused_ordering(698) 00:21:23.076 fused_ordering(699) 00:21:23.076 fused_ordering(700) 00:21:23.076 fused_ordering(701) 00:21:23.076 fused_ordering(702) 00:21:23.076 fused_ordering(703) 00:21:23.076 fused_ordering(704) 00:21:23.076 fused_ordering(705) 00:21:23.076 fused_ordering(706) 00:21:23.076 fused_ordering(707) 00:21:23.076 fused_ordering(708) 00:21:23.076 fused_ordering(709) 00:21:23.076 fused_ordering(710) 00:21:23.076 fused_ordering(711) 00:21:23.076 fused_ordering(712) 00:21:23.076 fused_ordering(713) 00:21:23.076 fused_ordering(714) 00:21:23.076 fused_ordering(715) 00:21:23.076 fused_ordering(716) 00:21:23.076 fused_ordering(717) 00:21:23.076 fused_ordering(718) 00:21:23.076 fused_ordering(719) 00:21:23.076 fused_ordering(720) 00:21:23.076 fused_ordering(721) 00:21:23.076 fused_ordering(722) 00:21:23.076 fused_ordering(723) 00:21:23.076 fused_ordering(724) 00:21:23.076 fused_ordering(725) 00:21:23.076 fused_ordering(726) 00:21:23.076 fused_ordering(727) 00:21:23.076 fused_ordering(728) 00:21:23.076 fused_ordering(729) 00:21:23.076 fused_ordering(730) 00:21:23.076 fused_ordering(731) 00:21:23.076 fused_ordering(732) 00:21:23.076 fused_ordering(733) 00:21:23.076 fused_ordering(734) 00:21:23.076 fused_ordering(735) 00:21:23.076 fused_ordering(736) 00:21:23.076 fused_ordering(737) 00:21:23.076 fused_ordering(738) 00:21:23.076 fused_ordering(739) 00:21:23.076 fused_ordering(740) 00:21:23.076 fused_ordering(741) 00:21:23.076 fused_ordering(742) 00:21:23.076 fused_ordering(743) 
00:21:23.076 fused_ordering(744) 00:21:23.076 fused_ordering(745) 00:21:23.076 fused_ordering(746) 00:21:23.076 fused_ordering(747) 00:21:23.076 fused_ordering(748) 00:21:23.076 fused_ordering(749) 00:21:23.076 fused_ordering(750) 00:21:23.076 fused_ordering(751) 00:21:23.076 fused_ordering(752) 00:21:23.076 fused_ordering(753) 00:21:23.076 fused_ordering(754) 00:21:23.076 fused_ordering(755) 00:21:23.076 fused_ordering(756) 00:21:23.076 fused_ordering(757) 00:21:23.076 fused_ordering(758) 00:21:23.076 fused_ordering(759) 00:21:23.076 fused_ordering(760) 00:21:23.076 fused_ordering(761) 00:21:23.076 fused_ordering(762) 00:21:23.076 fused_ordering(763) 00:21:23.076 fused_ordering(764) 00:21:23.076 fused_ordering(765) 00:21:23.076 fused_ordering(766) 00:21:23.076 fused_ordering(767) 00:21:23.076 fused_ordering(768) 00:21:23.076 fused_ordering(769) 00:21:23.076 fused_ordering(770) 00:21:23.076 fused_ordering(771) 00:21:23.076 fused_ordering(772) 00:21:23.076 fused_ordering(773) 00:21:23.076 fused_ordering(774) 00:21:23.076 fused_ordering(775) 00:21:23.076 fused_ordering(776) 00:21:23.076 fused_ordering(777) 00:21:23.076 fused_ordering(778) 00:21:23.076 fused_ordering(779) 00:21:23.076 fused_ordering(780) 00:21:23.076 fused_ordering(781) 00:21:23.076 fused_ordering(782) 00:21:23.076 fused_ordering(783) 00:21:23.076 fused_ordering(784) 00:21:23.076 fused_ordering(785) 00:21:23.076 fused_ordering(786) 00:21:23.076 fused_ordering(787) 00:21:23.076 fused_ordering(788) 00:21:23.076 fused_ordering(789) 00:21:23.076 fused_ordering(790) 00:21:23.076 fused_ordering(791) 00:21:23.076 fused_ordering(792) 00:21:23.076 fused_ordering(793) 00:21:23.076 fused_ordering(794) 00:21:23.076 fused_ordering(795) 00:21:23.076 fused_ordering(796) 00:21:23.076 fused_ordering(797) 00:21:23.076 fused_ordering(798) 00:21:23.076 fused_ordering(799) 00:21:23.076 fused_ordering(800) 00:21:23.076 fused_ordering(801) 00:21:23.076 fused_ordering(802) 00:21:23.076 fused_ordering(803) 00:21:23.076 fused_ordering(804) 00:21:23.076 fused_ordering(805) 00:21:23.076 fused_ordering(806) 00:21:23.076 fused_ordering(807) 00:21:23.076 fused_ordering(808) 00:21:23.076 fused_ordering(809) 00:21:23.076 fused_ordering(810) 00:21:23.076 fused_ordering(811) 00:21:23.076 fused_ordering(812) 00:21:23.076 fused_ordering(813) 00:21:23.076 fused_ordering(814) 00:21:23.076 fused_ordering(815) 00:21:23.076 fused_ordering(816) 00:21:23.076 fused_ordering(817) 00:21:23.076 fused_ordering(818) 00:21:23.076 fused_ordering(819) 00:21:23.076 fused_ordering(820) 00:21:23.641 fused_ordering(821) 00:21:23.641 fused_ordering(822) 00:21:23.641 fused_ordering(823) 00:21:23.641 fused_ordering(824) 00:21:23.641 fused_ordering(825) 00:21:23.641 fused_ordering(826) 00:21:23.641 fused_ordering(827) 00:21:23.641 fused_ordering(828) 00:21:23.641 fused_ordering(829) 00:21:23.641 fused_ordering(830) 00:21:23.641 fused_ordering(831) 00:21:23.641 fused_ordering(832) 00:21:23.641 fused_ordering(833) 00:21:23.641 fused_ordering(834) 00:21:23.641 fused_ordering(835) 00:21:23.641 fused_ordering(836) 00:21:23.641 fused_ordering(837) 00:21:23.641 fused_ordering(838) 00:21:23.641 fused_ordering(839) 00:21:23.641 fused_ordering(840) 00:21:23.641 fused_ordering(841) 00:21:23.641 fused_ordering(842) 00:21:23.641 fused_ordering(843) 00:21:23.641 fused_ordering(844) 00:21:23.641 fused_ordering(845) 00:21:23.641 fused_ordering(846) 00:21:23.641 fused_ordering(847) 00:21:23.641 fused_ordering(848) 00:21:23.641 fused_ordering(849) 00:21:23.641 fused_ordering(850) 00:21:23.641 
fused_ordering(851) 00:21:23.641 fused_ordering(852) 00:21:23.641 fused_ordering(853) 00:21:23.641 fused_ordering(854) 00:21:23.641 fused_ordering(855) 00:21:23.641 fused_ordering(856) 00:21:23.641 fused_ordering(857) 00:21:23.641 fused_ordering(858) 00:21:23.641 fused_ordering(859) 00:21:23.641 fused_ordering(860) 00:21:23.641 fused_ordering(861) 00:21:23.641 fused_ordering(862) 00:21:23.641 fused_ordering(863) 00:21:23.641 fused_ordering(864) 00:21:23.641 fused_ordering(865) 00:21:23.641 fused_ordering(866) 00:21:23.641 fused_ordering(867) 00:21:23.641 fused_ordering(868) 00:21:23.641 fused_ordering(869) 00:21:23.641 fused_ordering(870) 00:21:23.641 fused_ordering(871) 00:21:23.641 fused_ordering(872) 00:21:23.641 fused_ordering(873) 00:21:23.641 fused_ordering(874) 00:21:23.641 fused_ordering(875) 00:21:23.641 fused_ordering(876) 00:21:23.641 fused_ordering(877) 00:21:23.641 fused_ordering(878) 00:21:23.641 fused_ordering(879) 00:21:23.641 fused_ordering(880) 00:21:23.641 fused_ordering(881) 00:21:23.641 fused_ordering(882) 00:21:23.641 fused_ordering(883) 00:21:23.641 fused_ordering(884) 00:21:23.641 fused_ordering(885) 00:21:23.641 fused_ordering(886) 00:21:23.641 fused_ordering(887) 00:21:23.641 fused_ordering(888) 00:21:23.641 fused_ordering(889) 00:21:23.641 fused_ordering(890) 00:21:23.641 fused_ordering(891) 00:21:23.641 fused_ordering(892) 00:21:23.641 fused_ordering(893) 00:21:23.641 fused_ordering(894) 00:21:23.641 fused_ordering(895) 00:21:23.641 fused_ordering(896) 00:21:23.641 fused_ordering(897) 00:21:23.641 fused_ordering(898) 00:21:23.641 fused_ordering(899) 00:21:23.641 fused_ordering(900) 00:21:23.641 fused_ordering(901) 00:21:23.641 fused_ordering(902) 00:21:23.641 fused_ordering(903) 00:21:23.641 fused_ordering(904) 00:21:23.641 fused_ordering(905) 00:21:23.642 fused_ordering(906) 00:21:23.642 fused_ordering(907) 00:21:23.642 fused_ordering(908) 00:21:23.642 fused_ordering(909) 00:21:23.642 fused_ordering(910) 00:21:23.642 fused_ordering(911) 00:21:23.642 fused_ordering(912) 00:21:23.642 fused_ordering(913) 00:21:23.642 fused_ordering(914) 00:21:23.642 fused_ordering(915) 00:21:23.642 fused_ordering(916) 00:21:23.642 fused_ordering(917) 00:21:23.642 fused_ordering(918) 00:21:23.642 fused_ordering(919) 00:21:23.642 fused_ordering(920) 00:21:23.642 fused_ordering(921) 00:21:23.642 fused_ordering(922) 00:21:23.642 fused_ordering(923) 00:21:23.642 fused_ordering(924) 00:21:23.642 fused_ordering(925) 00:21:23.642 fused_ordering(926) 00:21:23.642 fused_ordering(927) 00:21:23.642 fused_ordering(928) 00:21:23.642 fused_ordering(929) 00:21:23.642 fused_ordering(930) 00:21:23.642 fused_ordering(931) 00:21:23.642 fused_ordering(932) 00:21:23.642 fused_ordering(933) 00:21:23.642 fused_ordering(934) 00:21:23.642 fused_ordering(935) 00:21:23.642 fused_ordering(936) 00:21:23.642 fused_ordering(937) 00:21:23.642 fused_ordering(938) 00:21:23.642 fused_ordering(939) 00:21:23.642 fused_ordering(940) 00:21:23.642 fused_ordering(941) 00:21:23.642 fused_ordering(942) 00:21:23.642 fused_ordering(943) 00:21:23.642 fused_ordering(944) 00:21:23.642 fused_ordering(945) 00:21:23.642 fused_ordering(946) 00:21:23.642 fused_ordering(947) 00:21:23.642 fused_ordering(948) 00:21:23.642 fused_ordering(949) 00:21:23.642 fused_ordering(950) 00:21:23.642 fused_ordering(951) 00:21:23.642 fused_ordering(952) 00:21:23.642 fused_ordering(953) 00:21:23.642 fused_ordering(954) 00:21:23.642 fused_ordering(955) 00:21:23.642 fused_ordering(956) 00:21:23.642 fused_ordering(957) 00:21:23.642 fused_ordering(958) 
00:21:23.642 fused_ordering(959) 00:21:23.642 fused_ordering(960) 00:21:23.642 fused_ordering(961) 00:21:23.642 fused_ordering(962) 00:21:23.642 fused_ordering(963) 00:21:23.642 fused_ordering(964) 00:21:23.642 fused_ordering(965) 00:21:23.642 fused_ordering(966) 00:21:23.642 fused_ordering(967) 00:21:23.642 fused_ordering(968) 00:21:23.642 fused_ordering(969) 00:21:23.642 fused_ordering(970) 00:21:23.642 fused_ordering(971) 00:21:23.642 fused_ordering(972) 00:21:23.642 fused_ordering(973) 00:21:23.642 fused_ordering(974) 00:21:23.642 fused_ordering(975) 00:21:23.642 fused_ordering(976) 00:21:23.642 fused_ordering(977) 00:21:23.642 fused_ordering(978) 00:21:23.642 fused_ordering(979) 00:21:23.642 fused_ordering(980) 00:21:23.642 fused_ordering(981) 00:21:23.642 fused_ordering(982) 00:21:23.642 fused_ordering(983) 00:21:23.642 fused_ordering(984) 00:21:23.642 fused_ordering(985) 00:21:23.642 fused_ordering(986) 00:21:23.642 fused_ordering(987) 00:21:23.642 fused_ordering(988) 00:21:23.642 fused_ordering(989) 00:21:23.642 fused_ordering(990) 00:21:23.642 fused_ordering(991) 00:21:23.642 fused_ordering(992) 00:21:23.642 fused_ordering(993) 00:21:23.642 fused_ordering(994) 00:21:23.642 fused_ordering(995) 00:21:23.642 fused_ordering(996) 00:21:23.642 fused_ordering(997) 00:21:23.642 fused_ordering(998) 00:21:23.642 fused_ordering(999) 00:21:23.642 fused_ordering(1000) 00:21:23.642 fused_ordering(1001) 00:21:23.642 fused_ordering(1002) 00:21:23.642 fused_ordering(1003) 00:21:23.642 fused_ordering(1004) 00:21:23.642 fused_ordering(1005) 00:21:23.642 fused_ordering(1006) 00:21:23.642 fused_ordering(1007) 00:21:23.642 fused_ordering(1008) 00:21:23.642 fused_ordering(1009) 00:21:23.642 fused_ordering(1010) 00:21:23.642 fused_ordering(1011) 00:21:23.642 fused_ordering(1012) 00:21:23.642 fused_ordering(1013) 00:21:23.642 fused_ordering(1014) 00:21:23.642 fused_ordering(1015) 00:21:23.642 fused_ordering(1016) 00:21:23.642 fused_ordering(1017) 00:21:23.642 fused_ordering(1018) 00:21:23.642 fused_ordering(1019) 00:21:23.642 fused_ordering(1020) 00:21:23.642 fused_ordering(1021) 00:21:23.642 fused_ordering(1022) 00:21:23.642 fused_ordering(1023) 00:21:23.642 15:39:53 -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:21:23.642 15:39:53 -- target/fused_ordering.sh@25 -- # nvmftestfini 00:21:23.642 15:39:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:23.642 15:39:53 -- nvmf/common.sh@117 -- # sync 00:21:23.900 15:39:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:23.900 15:39:53 -- nvmf/common.sh@120 -- # set +e 00:21:23.900 15:39:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:23.900 15:39:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:23.900 rmmod nvme_tcp 00:21:23.900 rmmod nvme_fabrics 00:21:23.900 rmmod nvme_keyring 00:21:23.900 15:39:54 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:23.900 15:39:54 -- nvmf/common.sh@124 -- # set -e 00:21:23.900 15:39:54 -- nvmf/common.sh@125 -- # return 0 00:21:23.900 15:39:54 -- nvmf/common.sh@478 -- # '[' -n 70233 ']' 00:21:23.900 15:39:54 -- nvmf/common.sh@479 -- # killprocess 70233 00:21:23.900 15:39:54 -- common/autotest_common.sh@936 -- # '[' -z 70233 ']' 00:21:23.900 15:39:54 -- common/autotest_common.sh@940 -- # kill -0 70233 00:21:23.900 15:39:54 -- common/autotest_common.sh@941 -- # uname 00:21:23.900 15:39:54 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:23.900 15:39:54 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70233 00:21:23.900 killing process with pid 70233 
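The teardown traced above (nvmftestfini/nvmfcleanup followed by killprocess) boils down to the following sketch; it only restates the commands shown in the log, with killprocess reduced to a plain kill-and-wait on the target PID from this run.

sync
modprobe -v -r nvme-tcp        # the rmmod output above shows nvme_tcp, nvme_fabrics, nvme_keyring unloading
modprobe -v -r nvme-fabrics
kill "$nvmfpid" && wait "$nvmfpid"   # pid 70233 in this run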
00:21:23.900 15:39:54 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:23.900 15:39:54 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:23.900 15:39:54 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70233' 00:21:23.900 15:39:54 -- common/autotest_common.sh@955 -- # kill 70233 00:21:23.900 15:39:54 -- common/autotest_common.sh@960 -- # wait 70233 00:21:24.159 15:39:54 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:24.159 15:39:54 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:24.159 15:39:54 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:24.159 15:39:54 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:24.159 15:39:54 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:24.159 15:39:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.159 15:39:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.159 15:39:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.417 15:39:54 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:24.417 00:21:24.417 real 0m4.431s 00:21:24.417 user 0m5.301s 00:21:24.417 sys 0m1.391s 00:21:24.417 15:39:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:24.417 15:39:54 -- common/autotest_common.sh@10 -- # set +x 00:21:24.417 ************************************ 00:21:24.417 END TEST nvmf_fused_ordering 00:21:24.417 ************************************ 00:21:24.417 15:39:54 -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:21:24.417 15:39:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:24.417 15:39:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:24.417 15:39:54 -- common/autotest_common.sh@10 -- # set +x 00:21:24.417 ************************************ 00:21:24.417 START TEST nvmf_delete_subsystem 00:21:24.417 ************************************ 00:21:24.417 15:39:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:21:24.417 * Looking for test storage... 
00:21:24.417 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:24.417 15:39:54 -- target/delete_subsystem.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:24.417 15:39:54 -- nvmf/common.sh@7 -- # uname -s 00:21:24.418 15:39:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:24.418 15:39:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:24.418 15:39:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:24.418 15:39:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:24.418 15:39:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:24.418 15:39:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:24.418 15:39:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:24.418 15:39:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:24.418 15:39:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:24.418 15:39:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:24.418 15:39:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:24.418 15:39:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:24.418 15:39:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:24.418 15:39:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:24.418 15:39:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:24.418 15:39:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:24.418 15:39:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:24.682 15:39:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:24.682 15:39:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:24.682 15:39:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:24.682 15:39:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.682 15:39:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.682 15:39:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.682 15:39:54 -- paths/export.sh@5 -- # export PATH 00:21:24.682 15:39:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:24.682 15:39:54 -- nvmf/common.sh@47 -- # : 0 00:21:24.682 15:39:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:24.682 15:39:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:24.682 15:39:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:24.682 15:39:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:24.682 15:39:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:24.682 15:39:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:24.682 15:39:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:24.682 15:39:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:24.682 15:39:54 -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:21:24.682 15:39:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:24.683 15:39:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:24.683 15:39:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:24.683 15:39:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:24.683 15:39:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:24.683 15:39:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:24.683 15:39:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:24.683 15:39:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:24.683 15:39:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:24.683 15:39:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:24.683 15:39:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:24.683 15:39:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:24.683 15:39:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:24.683 15:39:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:24.683 15:39:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.683 15:39:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.683 15:39:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:24.683 15:39:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:24.683 15:39:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:24.683 15:39:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:24.683 15:39:54 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:24.683 15:39:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:21:24.683 15:39:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:24.683 15:39:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:24.683 15:39:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:24.683 15:39:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:24.683 15:39:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:24.683 15:39:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:24.683 Cannot find device "nvmf_tgt_br" 00:21:24.683 15:39:54 -- nvmf/common.sh@155 -- # true 00:21:24.683 15:39:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:24.683 Cannot find device "nvmf_tgt_br2" 00:21:24.683 15:39:54 -- nvmf/common.sh@156 -- # true 00:21:24.683 15:39:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:24.683 15:39:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:24.683 Cannot find device "nvmf_tgt_br" 00:21:24.683 15:39:54 -- nvmf/common.sh@158 -- # true 00:21:24.683 15:39:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:24.683 Cannot find device "nvmf_tgt_br2" 00:21:24.683 15:39:54 -- nvmf/common.sh@159 -- # true 00:21:24.683 15:39:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:24.683 15:39:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:24.683 15:39:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:24.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.683 15:39:54 -- nvmf/common.sh@162 -- # true 00:21:24.683 15:39:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:24.683 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:24.683 15:39:54 -- nvmf/common.sh@163 -- # true 00:21:24.683 15:39:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:24.683 15:39:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:24.683 15:39:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:24.683 15:39:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:24.683 15:39:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:24.683 15:39:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:24.683 15:39:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:24.683 15:39:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:24.683 15:39:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:24.683 15:39:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:24.683 15:39:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:24.683 15:39:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:24.683 15:39:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:24.683 15:39:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:24.683 15:39:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:24.683 15:39:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:24.953 15:39:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:24.953 15:39:54 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:24.953 15:39:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:24.953 15:39:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:24.953 15:39:55 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:24.953 15:39:55 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:24.953 15:39:55 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:24.953 15:39:55 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:24.953 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.953 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.098 ms 00:21:24.953 00:21:24.953 --- 10.0.0.2 ping statistics --- 00:21:24.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.953 rtt min/avg/max/mdev = 0.098/0.098/0.098/0.000 ms 00:21:24.953 15:39:55 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:24.953 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:24.953 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:24.953 00:21:24.953 --- 10.0.0.3 ping statistics --- 00:21:24.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.953 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:24.953 15:39:55 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:24.953 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.953 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.021 ms 00:21:24.953 00:21:24.953 --- 10.0.0.1 ping statistics --- 00:21:24.953 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.953 rtt min/avg/max/mdev = 0.021/0.021/0.021/0.000 ms 00:21:24.953 15:39:55 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.953 15:39:55 -- nvmf/common.sh@422 -- # return 0 00:21:24.953 15:39:55 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:24.953 15:39:55 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.953 15:39:55 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:24.953 15:39:55 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:24.953 15:39:55 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.953 15:39:55 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:24.953 15:39:55 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:24.953 15:39:55 -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:21:24.953 15:39:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:24.953 15:39:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:24.953 15:39:55 -- common/autotest_common.sh@10 -- # set +x 00:21:24.953 15:39:55 -- nvmf/common.sh@470 -- # nvmfpid=70511 00:21:24.953 15:39:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:24.953 15:39:55 -- nvmf/common.sh@471 -- # waitforlisten 70511 00:21:24.953 15:39:55 -- common/autotest_common.sh@817 -- # '[' -z 70511 ']' 00:21:24.953 15:39:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.953 15:39:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:24.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.953 15:39:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
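The veth topology that nvmf_veth_init builds in the trace above can be condensed to the sketch below: the host-side interface (10.0.0.1) and the two target-side interfaces (10.0.0.2, 10.0.0.3) inside the nvmf_tgt_ns_spdk namespace are joined through the nvmf_br bridge. Interface, namespace, and address names are the ones this run uses; the loop over link-up commands is just a compaction of the individual commands in the log.

ip netns add nvmf_tgt_ns_spdk

# veth pairs: one host-facing, two target-facing (moved into the namespace).
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

# Bring everything up and enslave the bridge-side ends to nvmf_br.
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_tgt_br2 nvmf_br; do
    ip link set "$dev" up
done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br

# Allow NVMe/TCP traffic to port 4420 and forwarding across the bridge.
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

# Sanity pings, matching the statistics printed in the log.
ping -c 1 10.0.0.2
ping -c 1 10.0.0.3
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1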
00:21:24.953 15:39:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:24.953 15:39:55 -- common/autotest_common.sh@10 -- # set +x 00:21:24.953 [2024-04-26 15:39:55.114245] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:24.953 [2024-04-26 15:39:55.114326] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.223 [2024-04-26 15:39:55.244207] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:25.223 [2024-04-26 15:39:55.362170] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.223 [2024-04-26 15:39:55.362427] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.223 [2024-04-26 15:39:55.362558] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.223 [2024-04-26 15:39:55.362612] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.223 [2024-04-26 15:39:55.362642] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:25.223 [2024-04-26 15:39:55.362837] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.223 [2024-04-26 15:39:55.362846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.840 15:39:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:25.840 15:39:56 -- common/autotest_common.sh@850 -- # return 0 00:21:25.840 15:39:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:25.840 15:39:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 15:39:56 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.840 15:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 [2024-04-26 15:39:56.076085] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.840 15:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:25.840 15:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 15:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:25.840 15:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 [2024-04-26 15:39:56.092257] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.840 15:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:21:25.840 15:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 
NULL1 00:21:25.840 15:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:21:25.840 15:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 Delay0 00:21:25.840 15:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:25.840 15:39:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:25.840 15:39:56 -- common/autotest_common.sh@10 -- # set +x 00:21:25.840 15:39:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@28 -- # perf_pid=70563 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@30 -- # sleep 2 00:21:25.840 15:39:56 -- target/delete_subsystem.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:21:26.099 [2024-04-26 15:39:56.276967] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:27.999 15:39:58 -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:27.999 15:39:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:27.999 15:39:58 -- common/autotest_common.sh@10 -- # set +x 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting 
I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 starting I/O failed: -6 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 
starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 [2024-04-26 15:39:58.314769] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f76e0000c00 is same with the state(5) to be set 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.263 starting I/O failed: -6 00:21:28.263 Read completed with error (sct=0, sc=8) 00:21:28.263 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Read 
completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 starting I/O failed: -6 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 [2024-04-26 15:39:58.315334] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1032df0 is same with the state(5) to be set 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Write completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:28.264 Read completed with error (sct=0, sc=8) 00:21:29.261 [2024-04-26 15:39:59.293111] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1052710 is same with the state(5) to be set 00:21:29.261 
Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 [2024-04-26 15:39:59.313992] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f76e000bf90 is same with the state(5) to be set 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 [2024-04-26 15:39:59.314337] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x1051350 is same with the state(5) to be set 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 [2024-04-26 15:39:59.314895] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x10330b0 is same with the state(5) to be set 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.261 Write completed with error (sct=0, sc=8) 00:21:29.261 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 Write completed with error (sct=0, sc=8) 00:21:29.262 Read completed with error (sct=0, sc=8) 00:21:29.262 [2024-04-26 15:39:59.315667] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f76e000c690 is same with the state(5) to be set 00:21:29.262 [2024-04-26 15:39:59.316625] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1052710 (9): Bad 
file descriptor 00:21:29.262 /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:29.262 15:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.262 15:39:59 -- target/delete_subsystem.sh@34 -- # delay=0 00:21:29.262 15:39:59 -- target/delete_subsystem.sh@35 -- # kill -0 70563 00:21:29.262 15:39:59 -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:21:29.262 Initializing NVMe Controllers 00:21:29.262 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:29.262 Controller IO queue size 128, less than required. 00:21:29.262 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:29.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:21:29.262 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:21:29.262 Initialization complete. Launching workers. 00:21:29.262 ======================================================== 00:21:29.262 Latency(us) 00:21:29.262 Device Information : IOPS MiB/s Average min max 00:21:29.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 155.78 0.08 948943.43 367.04 2001890.32 00:21:29.262 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 183.56 0.09 907925.89 531.97 1011402.13 00:21:29.262 ======================================================== 00:21:29.262 Total : 339.34 0.17 926755.58 367.04 2001890.32 00:21:29.262 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@35 -- # kill -0 70563 00:21:29.828 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (70563) - No such process 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@45 -- # NOT wait 70563 00:21:29.828 15:39:59 -- common/autotest_common.sh@638 -- # local es=0 00:21:29.828 15:39:59 -- common/autotest_common.sh@640 -- # valid_exec_arg wait 70563 00:21:29.828 15:39:59 -- common/autotest_common.sh@626 -- # local arg=wait 00:21:29.828 15:39:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:29.828 15:39:59 -- common/autotest_common.sh@630 -- # type -t wait 00:21:29.828 15:39:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:29.828 15:39:59 -- common/autotest_common.sh@641 -- # wait 70563 00:21:29.828 15:39:59 -- common/autotest_common.sh@641 -- # es=1 00:21:29.828 15:39:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:29.828 15:39:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:29.828 15:39:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:21:29.828 15:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.828 15:39:59 -- common/autotest_common.sh@10 -- # set +x 00:21:29.828 15:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:29.828 15:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.828 15:39:59 -- common/autotest_common.sh@10 -- # set +x 00:21:29.828 [2024-04-26 15:39:59.842886] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:29.828 15:39:59 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:21:29.828 15:39:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:29.828 15:39:59 -- common/autotest_common.sh@10 -- # set +x 00:21:29.828 15:39:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@54 -- # perf_pid=70603 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@56 -- # delay=0 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:29.828 15:39:59 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:29.828 [2024-04-26 15:40:00.019777] subsystem.c:1435:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:21:30.086 15:40:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:30.086 15:40:00 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:30.086 15:40:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:30.651 15:40:00 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:30.651 15:40:00 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:30.651 15:40:00 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:31.218 15:40:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:31.218 15:40:01 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:31.218 15:40:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:31.784 15:40:01 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:31.784 15:40:01 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:31.784 15:40:01 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:32.352 15:40:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:32.352 15:40:02 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:32.352 15:40:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:32.609 15:40:02 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:32.609 15:40:02 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:32.609 15:40:02 -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:21:32.868 Initializing NVMe Controllers 00:21:32.868 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:32.868 Controller IO queue size 128, less than required. 00:21:32.868 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:32.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:21:32.868 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:21:32.868 Initialization complete. Launching workers. 
00:21:32.868 ======================================================== 00:21:32.868 Latency(us) 00:21:32.868 Device Information : IOPS MiB/s Average min max 00:21:32.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003357.74 1000157.52 1010698.85 00:21:32.868 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005318.13 1000173.96 1042156.59 00:21:32.868 ======================================================== 00:21:32.868 Total : 256.00 0.12 1004337.94 1000157.52 1042156.59 00:21:32.868 00:21:33.124 15:40:03 -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:21:33.124 15:40:03 -- target/delete_subsystem.sh@57 -- # kill -0 70603 00:21:33.124 /home/vagrant/spdk_repo/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (70603) - No such process 00:21:33.124 15:40:03 -- target/delete_subsystem.sh@67 -- # wait 70603 00:21:33.124 15:40:03 -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:33.124 15:40:03 -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:21:33.124 15:40:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:33.124 15:40:03 -- nvmf/common.sh@117 -- # sync 00:21:33.124 15:40:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:33.124 15:40:03 -- nvmf/common.sh@120 -- # set +e 00:21:33.124 15:40:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:33.124 15:40:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:33.382 rmmod nvme_tcp 00:21:33.382 rmmod nvme_fabrics 00:21:33.382 rmmod nvme_keyring 00:21:33.382 15:40:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:33.382 15:40:03 -- nvmf/common.sh@124 -- # set -e 00:21:33.382 15:40:03 -- nvmf/common.sh@125 -- # return 0 00:21:33.382 15:40:03 -- nvmf/common.sh@478 -- # '[' -n 70511 ']' 00:21:33.382 15:40:03 -- nvmf/common.sh@479 -- # killprocess 70511 00:21:33.382 15:40:03 -- common/autotest_common.sh@936 -- # '[' -z 70511 ']' 00:21:33.382 15:40:03 -- common/autotest_common.sh@940 -- # kill -0 70511 00:21:33.382 15:40:03 -- common/autotest_common.sh@941 -- # uname 00:21:33.382 15:40:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:33.382 15:40:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70511 00:21:33.382 killing process with pid 70511 00:21:33.382 15:40:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:33.382 15:40:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:33.382 15:40:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70511' 00:21:33.382 15:40:03 -- common/autotest_common.sh@955 -- # kill 70511 00:21:33.382 15:40:03 -- common/autotest_common.sh@960 -- # wait 70511 00:21:33.640 15:40:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:33.640 15:40:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:33.640 15:40:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:33.640 15:40:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:33.640 15:40:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:33.640 15:40:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.640 15:40:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.640 15:40:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.640 15:40:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:33.640 00:21:33.640 real 0m9.189s 00:21:33.640 user 0m28.485s 00:21:33.640 sys 0m1.507s 00:21:33.640 15:40:03 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:21:33.640 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 ************************************ 00:21:33.640 END TEST nvmf_delete_subsystem 00:21:33.640 ************************************ 00:21:33.640 15:40:03 -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:21:33.640 15:40:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:33.640 15:40:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:33.640 15:40:03 -- common/autotest_common.sh@10 -- # set +x 00:21:33.640 ************************************ 00:21:33.641 START TEST nvmf_ns_masking 00:21:33.641 ************************************ 00:21:33.641 15:40:03 -- common/autotest_common.sh@1111 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:21:33.899 * Looking for test storage... 00:21:33.899 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:33.899 15:40:03 -- target/ns_masking.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:33.899 15:40:04 -- nvmf/common.sh@7 -- # uname -s 00:21:33.899 15:40:04 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:33.899 15:40:04 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:33.899 15:40:04 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:33.899 15:40:04 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:33.899 15:40:04 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:33.899 15:40:04 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:33.899 15:40:04 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:33.899 15:40:04 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:33.899 15:40:04 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:33.899 15:40:04 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:33.899 15:40:04 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:33.899 15:40:04 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:33.899 15:40:04 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:33.899 15:40:04 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:33.899 15:40:04 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:33.899 15:40:04 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:33.899 15:40:04 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:33.899 15:40:04 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:33.899 15:40:04 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:33.899 15:40:04 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:33.899 15:40:04 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.899 15:40:04 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.899 15:40:04 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.899 15:40:04 -- paths/export.sh@5 -- # export PATH 00:21:33.899 15:40:04 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:33.899 15:40:04 -- nvmf/common.sh@47 -- # : 0 00:21:33.899 15:40:04 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:33.899 15:40:04 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:33.899 15:40:04 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:33.899 15:40:04 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:33.899 15:40:04 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:33.899 15:40:04 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:33.899 15:40:04 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:33.899 15:40:04 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:33.899 15:40:04 -- target/ns_masking.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:33.899 15:40:04 -- target/ns_masking.sh@11 -- # loops=5 00:21:33.899 15:40:04 -- target/ns_masking.sh@13 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:21:33.899 15:40:04 -- target/ns_masking.sh@14 -- # HOSTNQN=nqn.2016-06.io.spdk:host1 00:21:33.899 15:40:04 -- target/ns_masking.sh@15 -- # uuidgen 00:21:33.899 15:40:04 -- target/ns_masking.sh@15 -- # HOSTID=b5e75ec2-ae89-4f60-bd63-1ed310ce955c 00:21:33.899 15:40:04 -- target/ns_masking.sh@44 -- # nvmftestinit 00:21:33.899 15:40:04 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:33.899 15:40:04 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.899 15:40:04 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:33.899 15:40:04 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:33.899 15:40:04 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:33.899 15:40:04 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:33.899 15:40:04 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:33.899 15:40:04 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
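Because NET_TYPE=virt, nvmftestinit drops into nvmf_veth_init at this point: the SPDK target will run inside a dedicated network namespace and reach the initiator over veth pairs. The "Cannot find device" and "Cannot open network namespace" messages in the next block are just the script tearing down leftovers from a previous run and finding none. As a condensed sketch of the creation half that the following log lines perform (namespace and interface names exactly as nvmf/common.sh uses in this run, to be run as root, and only an illustration rather than a substitute for the script):

  # The target lives in its own namespace; three veth pairs link it to the host.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator side
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br     # first target interface
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2    # second target interface
  # Move the target-side endpoints into the namespace; the *_br peers stay on the host.
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk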
00:21:33.899 15:40:04 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:33.899 15:40:04 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:33.899 15:40:04 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:33.899 15:40:04 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:33.899 15:40:04 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:33.899 15:40:04 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:33.899 15:40:04 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:33.899 15:40:04 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.899 15:40:04 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:33.899 15:40:04 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:33.899 15:40:04 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:33.899 15:40:04 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:33.899 15:40:04 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:33.899 15:40:04 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.899 15:40:04 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:33.899 15:40:04 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:33.899 15:40:04 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:33.899 15:40:04 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:33.899 15:40:04 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:33.899 15:40:04 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:33.899 Cannot find device "nvmf_tgt_br" 00:21:33.899 15:40:04 -- nvmf/common.sh@155 -- # true 00:21:33.899 15:40:04 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:33.899 Cannot find device "nvmf_tgt_br2" 00:21:33.899 15:40:04 -- nvmf/common.sh@156 -- # true 00:21:33.899 15:40:04 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:33.899 15:40:04 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:33.899 Cannot find device "nvmf_tgt_br" 00:21:33.899 15:40:04 -- nvmf/common.sh@158 -- # true 00:21:33.899 15:40:04 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:33.899 Cannot find device "nvmf_tgt_br2" 00:21:33.899 15:40:04 -- nvmf/common.sh@159 -- # true 00:21:33.899 15:40:04 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:33.899 15:40:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:33.899 15:40:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:33.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.899 15:40:04 -- nvmf/common.sh@162 -- # true 00:21:33.899 15:40:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:33.899 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:33.899 15:40:04 -- nvmf/common.sh@163 -- # true 00:21:33.899 15:40:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:33.899 15:40:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:33.899 15:40:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:34.163 15:40:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:34.163 15:40:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:34.163 15:40:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 
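With the links created and moved, the remaining nvmf_veth_init steps traced below assign addresses, bring the interfaces up, bridge the host-side peers, open the NVMe/TCP port, and ping across the path before the target is started. Roughly, under the same naming assumptions as the sketch above:

  # Initiator address on the host, target addresses inside the namespace.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  # Bring everything up, including loopback inside the namespace.
  ip link set nvmf_init_if up; ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # One bridge joins the host-side peers, and iptables admits TCP port 4420.
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  # The sub-millisecond pings that follow confirm both target addresses and the reverse path.
  ping -c 1 10.0.0.2; ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1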
00:21:34.163 15:40:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:34.163 15:40:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:34.163 15:40:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:34.164 15:40:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:34.164 15:40:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:34.164 15:40:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:34.164 15:40:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:34.164 15:40:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:34.164 15:40:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:34.164 15:40:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:34.164 15:40:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:34.164 15:40:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:34.164 15:40:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:34.164 15:40:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:34.164 15:40:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:34.164 15:40:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:34.164 15:40:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:34.164 15:40:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:34.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.088 ms 00:21:34.164 00:21:34.164 --- 10.0.0.2 ping statistics --- 00:21:34.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.164 rtt min/avg/max/mdev = 0.088/0.088/0.088/0.000 ms 00:21:34.164 15:40:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:34.164 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:34.164 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:21:34.164 00:21:34.164 --- 10.0.0.3 ping statistics --- 00:21:34.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.164 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:21:34.164 15:40:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:34.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:34.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:21:34.164 00:21:34.164 --- 10.0.0.1 ping statistics --- 00:21:34.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.164 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:21:34.164 15:40:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.164 15:40:04 -- nvmf/common.sh@422 -- # return 0 00:21:34.164 15:40:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:34.164 15:40:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.164 15:40:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:34.164 15:40:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:34.164 15:40:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.164 15:40:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:34.164 15:40:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:34.164 15:40:04 -- target/ns_masking.sh@45 -- # nvmfappstart -m 0xF 00:21:34.164 15:40:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:34.164 15:40:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:34.164 15:40:04 -- common/autotest_common.sh@10 -- # set +x 00:21:34.164 15:40:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:34.164 15:40:04 -- nvmf/common.sh@470 -- # nvmfpid=70850 00:21:34.164 15:40:04 -- nvmf/common.sh@471 -- # waitforlisten 70850 00:21:34.164 15:40:04 -- common/autotest_common.sh@817 -- # '[' -z 70850 ']' 00:21:34.164 15:40:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.164 15:40:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:34.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.164 15:40:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.164 15:40:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:34.164 15:40:04 -- common/autotest_common.sh@10 -- # set +x 00:21:34.422 [2024-04-26 15:40:04.483281] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:34.422 [2024-04-26 15:40:04.483392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.422 [2024-04-26 15:40:04.627321] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.680 [2024-04-26 15:40:04.776403] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.680 [2024-04-26 15:40:04.776865] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.680 [2024-04-26 15:40:04.777102] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.680 [2024-04-26 15:40:04.777425] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.680 [2024-04-26 15:40:04.777644] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
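From here the target comes up on cores 0-3 (the reactor messages that follow) and the rest of the ns_masking run is driven entirely over JSON-RPC and nvme-cli; the xtrace lines below spell out every call. Condensed into one place, the masking scenario this test walks through looks roughly like this (NQNs, host UUID and addresses exactly as in this run; rpc.py talks to the /var/tmp/spdk.sock socket the waitforlisten message above refers to):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc bdev_malloc_create 64 512 -b Malloc1
  $rpc bdev_malloc_create 64 512 -b Malloc2
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # Connect as host1 and list namespaces: ns 1 (and then ns 2) is visible because the
  # namespaces were added with auto-visibility.
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
       -I b5e75ec2-ae89-4f60-bd63-1ed310ce955c -a 10.0.0.2 -s 4420 -i 4
  nvme list-ns /dev/nvme0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1

  # Re-add namespace 1 without auto-visibility: a host now sees it only after an explicit grant.
  $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
  $rpc nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 appears to host1
  $rpc nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # ns 1 hidden again

  # The excerpt ends by checking that the same call against namespace 2, which was added
  # without --no-auto-visible, is rejected (the "Unable to add/remove ... namespace ID 2"
  # error at the end of this section).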
00:21:34.680 [2024-04-26 15:40:04.777868] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.680 [2024-04-26 15:40:04.777982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.680 [2024-04-26 15:40:04.778067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.680 [2024-04-26 15:40:04.778072] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:35.246 15:40:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:35.246 15:40:05 -- common/autotest_common.sh@850 -- # return 0 00:21:35.246 15:40:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:35.246 15:40:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:35.246 15:40:05 -- common/autotest_common.sh@10 -- # set +x 00:21:35.505 15:40:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.505 15:40:05 -- target/ns_masking.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:35.762 [2024-04-26 15:40:05.822939] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.762 15:40:05 -- target/ns_masking.sh@49 -- # MALLOC_BDEV_SIZE=64 00:21:35.762 15:40:05 -- target/ns_masking.sh@50 -- # MALLOC_BLOCK_SIZE=512 00:21:35.762 15:40:05 -- target/ns_masking.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:21:36.020 Malloc1 00:21:36.020 15:40:06 -- target/ns_masking.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:21:36.278 Malloc2 00:21:36.278 15:40:06 -- target/ns_masking.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:21:36.536 15:40:06 -- target/ns_masking.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:21:36.793 15:40:06 -- target/ns_masking.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:37.051 [2024-04-26 15:40:07.146008] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:37.051 15:40:07 -- target/ns_masking.sh@61 -- # connect 00:21:37.051 15:40:07 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b5e75ec2-ae89-4f60-bd63-1ed310ce955c -a 10.0.0.2 -s 4420 -i 4 00:21:37.051 15:40:07 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 00:21:37.051 15:40:07 -- common/autotest_common.sh@1184 -- # local i=0 00:21:37.051 15:40:07 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:37.051 15:40:07 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:21:37.051 15:40:07 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:39.581 15:40:09 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:39.581 15:40:09 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:39.581 15:40:09 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:39.581 15:40:09 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:39.581 15:40:09 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:39.581 15:40:09 -- common/autotest_common.sh@1194 -- # return 0 00:21:39.581 15:40:09 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:39.581 15:40:09 -- 
target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:39.581 15:40:09 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:39.582 15:40:09 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:39.582 15:40:09 -- target/ns_masking.sh@62 -- # ns_is_visible 0x1 00:21:39.582 15:40:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:39.582 15:40:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:39.582 [ 0]:0x1 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # nguid=59146e005b7e4004bf50fa7524cb2ffe 00:21:39.582 15:40:09 -- target/ns_masking.sh@41 -- # [[ 59146e005b7e4004bf50fa7524cb2ffe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:39.582 15:40:09 -- target/ns_masking.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:21:39.582 15:40:09 -- target/ns_masking.sh@66 -- # ns_is_visible 0x1 00:21:39.582 15:40:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:39.582 15:40:09 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:39.582 [ 0]:0x1 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # nguid=59146e005b7e4004bf50fa7524cb2ffe 00:21:39.582 15:40:09 -- target/ns_masking.sh@41 -- # [[ 59146e005b7e4004bf50fa7524cb2ffe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:39.582 15:40:09 -- target/ns_masking.sh@67 -- # ns_is_visible 0x2 00:21:39.582 15:40:09 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:39.582 15:40:09 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:39.582 [ 1]:0x2 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:39.582 15:40:09 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:39.582 15:40:09 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:39.582 15:40:09 -- target/ns_masking.sh@69 -- # disconnect 00:21:39.582 15:40:09 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:39.839 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:39.839 15:40:09 -- target/ns_masking.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:21:40.097 15:40:10 -- target/ns_masking.sh@74 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:21:40.356 15:40:10 -- target/ns_masking.sh@77 -- # connect 1 00:21:40.356 15:40:10 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b5e75ec2-ae89-4f60-bd63-1ed310ce955c -a 10.0.0.2 -s 4420 -i 4 00:21:40.356 15:40:10 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 1 00:21:40.356 15:40:10 -- common/autotest_common.sh@1184 -- # local i=0 00:21:40.356 15:40:10 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:40.356 15:40:10 -- common/autotest_common.sh@1186 -- # [[ -n 1 ]] 00:21:40.356 15:40:10 -- 
common/autotest_common.sh@1187 -- # nvme_device_counter=1 00:21:40.356 15:40:10 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:42.287 15:40:12 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:42.287 15:40:12 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:42.287 15:40:12 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:42.287 15:40:12 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:21:42.287 15:40:12 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:42.287 15:40:12 -- common/autotest_common.sh@1194 -- # return 0 00:21:42.287 15:40:12 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:42.287 15:40:12 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:42.545 15:40:12 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:42.545 15:40:12 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:42.545 15:40:12 -- target/ns_masking.sh@78 -- # NOT ns_is_visible 0x1 00:21:42.545 15:40:12 -- common/autotest_common.sh@638 -- # local es=0 00:21:42.545 15:40:12 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:21:42.545 15:40:12 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:42.545 15:40:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.545 15:40:12 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:42.545 15:40:12 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:42.545 15:40:12 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:42.545 15:40:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:42.545 15:40:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:42.545 15:40:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:42.545 15:40:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:42.545 15:40:12 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:42.545 15:40:12 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:42.545 15:40:12 -- common/autotest_common.sh@641 -- # es=1 00:21:42.545 15:40:12 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:42.545 15:40:12 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:42.545 15:40:12 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:42.545 15:40:12 -- target/ns_masking.sh@79 -- # ns_is_visible 0x2 00:21:42.545 15:40:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:42.545 15:40:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:42.545 [ 0]:0x2 00:21:42.545 15:40:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:42.545 15:40:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:42.545 15:40:12 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:42.545 15:40:12 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:42.545 15:40:12 -- target/ns_masking.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:42.804 15:40:12 -- target/ns_masking.sh@83 -- # ns_is_visible 0x1 00:21:42.804 15:40:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:42.804 15:40:12 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:42.804 [ 0]:0x1 00:21:42.804 
15:40:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:42.804 15:40:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:42.804 15:40:12 -- target/ns_masking.sh@40 -- # nguid=59146e005b7e4004bf50fa7524cb2ffe 00:21:42.804 15:40:12 -- target/ns_masking.sh@41 -- # [[ 59146e005b7e4004bf50fa7524cb2ffe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:42.804 15:40:12 -- target/ns_masking.sh@84 -- # ns_is_visible 0x2 00:21:42.804 15:40:12 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:42.804 15:40:12 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:42.804 [ 1]:0x2 00:21:42.804 15:40:12 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:42.804 15:40:12 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:42.804 15:40:13 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:42.804 15:40:13 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:42.804 15:40:13 -- target/ns_masking.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:43.062 15:40:13 -- target/ns_masking.sh@88 -- # NOT ns_is_visible 0x1 00:21:43.062 15:40:13 -- common/autotest_common.sh@638 -- # local es=0 00:21:43.062 15:40:13 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:21:43.062 15:40:13 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:43.062 15:40:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:43.062 15:40:13 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:43.062 15:40:13 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:43.062 15:40:13 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:43.062 15:40:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:43.062 15:40:13 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:43.062 15:40:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:43.062 15:40:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:43.062 15:40:13 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:43.062 15:40:13 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:43.062 15:40:13 -- common/autotest_common.sh@641 -- # es=1 00:21:43.062 15:40:13 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:43.062 15:40:13 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:43.062 15:40:13 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:43.062 15:40:13 -- target/ns_masking.sh@89 -- # ns_is_visible 0x2 00:21:43.062 15:40:13 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:43.062 15:40:13 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:43.062 [ 0]:0x2 00:21:43.062 15:40:13 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:43.062 15:40:13 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:43.319 15:40:13 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:43.319 15:40:13 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:43.319 15:40:13 -- target/ns_masking.sh@91 -- # disconnect 00:21:43.319 15:40:13 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:43.319 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:43.319 15:40:13 -- target/ns_masking.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:43.576 15:40:13 -- target/ns_masking.sh@95 -- # connect 2 00:21:43.576 15:40:13 -- target/ns_masking.sh@18 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I b5e75ec2-ae89-4f60-bd63-1ed310ce955c -a 10.0.0.2 -s 4420 -i 4 00:21:43.576 15:40:13 -- target/ns_masking.sh@20 -- # waitforserial SPDKISFASTANDAWESOME 2 00:21:43.576 15:40:13 -- common/autotest_common.sh@1184 -- # local i=0 00:21:43.576 15:40:13 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:21:43.576 15:40:13 -- common/autotest_common.sh@1186 -- # [[ -n 2 ]] 00:21:43.576 15:40:13 -- common/autotest_common.sh@1187 -- # nvme_device_counter=2 00:21:43.576 15:40:13 -- common/autotest_common.sh@1191 -- # sleep 2 00:21:45.477 15:40:15 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:21:45.477 15:40:15 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:21:45.477 15:40:15 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:21:45.741 15:40:15 -- common/autotest_common.sh@1193 -- # nvme_devices=2 00:21:45.741 15:40:15 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:21:45.741 15:40:15 -- common/autotest_common.sh@1194 -- # return 0 00:21:45.741 15:40:15 -- target/ns_masking.sh@22 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:21:45.741 15:40:15 -- target/ns_masking.sh@22 -- # nvme list-subsys -o json 00:21:45.741 15:40:15 -- target/ns_masking.sh@22 -- # ctrl_id=nvme0 00:21:45.741 15:40:15 -- target/ns_masking.sh@23 -- # [[ -z nvme0 ]] 00:21:45.741 15:40:15 -- target/ns_masking.sh@96 -- # ns_is_visible 0x1 00:21:45.741 15:40:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:45.741 15:40:15 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:45.741 [ 0]:0x1 00:21:45.741 15:40:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:45.741 15:40:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:45.741 15:40:15 -- target/ns_masking.sh@40 -- # nguid=59146e005b7e4004bf50fa7524cb2ffe 00:21:45.741 15:40:15 -- target/ns_masking.sh@41 -- # [[ 59146e005b7e4004bf50fa7524cb2ffe != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:45.741 15:40:15 -- target/ns_masking.sh@97 -- # ns_is_visible 0x2 00:21:45.741 15:40:15 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:45.741 15:40:15 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:45.741 [ 1]:0x2 00:21:45.742 15:40:15 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:45.742 15:40:15 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:45.742 15:40:15 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:45.742 15:40:15 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:45.742 15:40:15 -- target/ns_masking.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:21:45.999 15:40:16 -- target/ns_masking.sh@101 -- # NOT ns_is_visible 0x1 00:21:45.999 15:40:16 -- common/autotest_common.sh@638 -- # local es=0 00:21:45.999 15:40:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 
00:21:45.999 15:40:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:45.999 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:45.999 15:40:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:46.000 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:46.000 15:40:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:46.000 15:40:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:46.000 15:40:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:46.000 15:40:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:46.000 15:40:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:46.000 15:40:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:46.000 15:40:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.000 15:40:16 -- common/autotest_common.sh@641 -- # es=1 00:21:46.000 15:40:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:46.000 15:40:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:46.000 15:40:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:46.000 15:40:16 -- target/ns_masking.sh@102 -- # ns_is_visible 0x2 00:21:46.258 15:40:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:46.258 15:40:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:46.258 [ 0]:0x2 00:21:46.258 15:40:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:46.258 15:40:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:46.258 15:40:16 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:46.258 15:40:16 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.258 15:40:16 -- target/ns_masking.sh@105 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:46.258 15:40:16 -- common/autotest_common.sh@638 -- # local es=0 00:21:46.258 15:40:16 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:46.258 15:40:16 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.258 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:46.258 15:40:16 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.258 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:46.258 15:40:16 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.258 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:46.258 15:40:16 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:46.258 15:40:16 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:21:46.258 15:40:16 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:21:46.516 [2024-04-26 15:40:16.610217] nvmf_rpc.c:1779:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:21:46.516 2024/04/26 15:40:16 error on 
JSON-RPC call, method: nvmf_ns_remove_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 nsid:2], err: error received for nvmf_ns_remove_host method, err: Code=-32602 Msg=Invalid parameters 00:21:46.516 request: 00:21:46.516 { 00:21:46.516 "method": "nvmf_ns_remove_host", 00:21:46.516 "params": { 00:21:46.516 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:46.516 "nsid": 2, 00:21:46.516 "host": "nqn.2016-06.io.spdk:host1" 00:21:46.516 } 00:21:46.516 } 00:21:46.516 Got JSON-RPC error response 00:21:46.516 GoRPCClient: error on JSON-RPC call 00:21:46.516 15:40:16 -- common/autotest_common.sh@641 -- # es=1 00:21:46.516 15:40:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:46.516 15:40:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:46.516 15:40:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:46.516 15:40:16 -- target/ns_masking.sh@106 -- # NOT ns_is_visible 0x1 00:21:46.516 15:40:16 -- common/autotest_common.sh@638 -- # local es=0 00:21:46.516 15:40:16 -- common/autotest_common.sh@640 -- # valid_exec_arg ns_is_visible 0x1 00:21:46.516 15:40:16 -- common/autotest_common.sh@626 -- # local arg=ns_is_visible 00:21:46.516 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:46.516 15:40:16 -- common/autotest_common.sh@630 -- # type -t ns_is_visible 00:21:46.516 15:40:16 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:21:46.516 15:40:16 -- common/autotest_common.sh@641 -- # ns_is_visible 0x1 00:21:46.516 15:40:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:46.516 15:40:16 -- target/ns_masking.sh@39 -- # grep 0x1 00:21:46.516 15:40:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:21:46.516 15:40:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:46.516 15:40:16 -- target/ns_masking.sh@40 -- # nguid=00000000000000000000000000000000 00:21:46.516 15:40:16 -- target/ns_masking.sh@41 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.516 15:40:16 -- common/autotest_common.sh@641 -- # es=1 00:21:46.517 15:40:16 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:21:46.517 15:40:16 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:21:46.517 15:40:16 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:21:46.517 15:40:16 -- target/ns_masking.sh@107 -- # ns_is_visible 0x2 00:21:46.517 15:40:16 -- target/ns_masking.sh@39 -- # nvme list-ns /dev/nvme0 00:21:46.517 15:40:16 -- target/ns_masking.sh@39 -- # grep 0x2 00:21:46.517 [ 0]:0x2 00:21:46.517 15:40:16 -- target/ns_masking.sh@40 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:21:46.517 15:40:16 -- target/ns_masking.sh@40 -- # jq -r .nguid 00:21:46.517 15:40:16 -- target/ns_masking.sh@40 -- # nguid=f822f3ac1a7c44d3adab8eff0b909ec2 00:21:46.517 15:40:16 -- target/ns_masking.sh@41 -- # [[ f822f3ac1a7c44d3adab8eff0b909ec2 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:21:46.517 15:40:16 -- target/ns_masking.sh@108 -- # disconnect 00:21:46.517 15:40:16 -- target/ns_masking.sh@34 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:21:46.517 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:21:46.517 15:40:16 -- target/ns_masking.sh@110 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:47.082 15:40:17 -- target/ns_masking.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:21:47.082 15:40:17 -- target/ns_masking.sh@114 -- # nvmftestfini 
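[Editor's note] For reference, the visibility probe that the ns_masking checks above keep repeating boils down to the small helper sketched here. This is a condensed reconstruction from the traced commands (device name /dev/nvme0 and the all-zero NGUID sentinel are taken from the trace); the actual ns_masking.sh wraps the same calls in its own helpers, so treat this as a sketch rather than the script itself.

    # Sketch of the per-namespace visibility probe exercised above (not the literal script).
    ns_is_visible() {
        local nsid=$1
        # Print the entry (if any) from the controller's active namespace list.
        nvme list-ns /dev/nvme0 | grep "$nsid"
        # Identify Namespace: a masked/inactive NSID comes back with an all-zero NGUID.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

    # Visibility is toggled per host over JSON-RPC, as in the trace above:
    #   rpc.py nvmf_ns_add_host    nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 visible
    #   rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1   # nsid 1 hidden
    # The failing nvmf_ns_remove_host on nsid 2 above is the expected negative case: that
    # namespace was never switched to per-host masking, so the RPC rejects it with -32602.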
00:21:47.082 15:40:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:47.082 15:40:17 -- nvmf/common.sh@117 -- # sync 00:21:47.082 15:40:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.082 15:40:17 -- nvmf/common.sh@120 -- # set +e 00:21:47.082 15:40:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.082 15:40:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.082 rmmod nvme_tcp 00:21:47.083 rmmod nvme_fabrics 00:21:47.083 rmmod nvme_keyring 00:21:47.083 15:40:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.083 15:40:17 -- nvmf/common.sh@124 -- # set -e 00:21:47.083 15:40:17 -- nvmf/common.sh@125 -- # return 0 00:21:47.083 15:40:17 -- nvmf/common.sh@478 -- # '[' -n 70850 ']' 00:21:47.083 15:40:17 -- nvmf/common.sh@479 -- # killprocess 70850 00:21:47.083 15:40:17 -- common/autotest_common.sh@936 -- # '[' -z 70850 ']' 00:21:47.083 15:40:17 -- common/autotest_common.sh@940 -- # kill -0 70850 00:21:47.083 15:40:17 -- common/autotest_common.sh@941 -- # uname 00:21:47.083 15:40:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:47.083 15:40:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 70850 00:21:47.083 15:40:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:21:47.083 15:40:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:21:47.083 killing process with pid 70850 00:21:47.083 15:40:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 70850' 00:21:47.083 15:40:17 -- common/autotest_common.sh@955 -- # kill 70850 00:21:47.083 15:40:17 -- common/autotest_common.sh@960 -- # wait 70850 00:21:47.340 15:40:17 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:47.340 15:40:17 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:47.340 15:40:17 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:47.340 15:40:17 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.340 15:40:17 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.340 15:40:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.340 15:40:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.340 15:40:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.340 15:40:17 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:47.340 00:21:47.340 real 0m13.665s 00:21:47.340 user 0m54.123s 00:21:47.340 sys 0m2.494s 00:21:47.340 15:40:17 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:47.340 ************************************ 00:21:47.340 END TEST nvmf_ns_masking 00:21:47.340 ************************************ 00:21:47.340 15:40:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.340 15:40:17 -- nvmf/nvmf.sh@37 -- # [[ 0 -eq 1 ]] 00:21:47.340 15:40:17 -- nvmf/nvmf.sh@40 -- # [[ 0 -eq 1 ]] 00:21:47.340 15:40:17 -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:47.340 15:40:17 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:47.340 15:40:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:47.340 15:40:17 -- common/autotest_common.sh@10 -- # set +x 00:21:47.599 ************************************ 00:21:47.599 START TEST nvmf_host_management 00:21:47.599 ************************************ 00:21:47.599 15:40:17 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:21:47.599 * Looking for test storage... 
00:21:47.599 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:47.599 15:40:17 -- target/host_management.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:47.599 15:40:17 -- nvmf/common.sh@7 -- # uname -s 00:21:47.599 15:40:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:47.599 15:40:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:47.599 15:40:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:47.599 15:40:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:47.599 15:40:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:47.599 15:40:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:47.599 15:40:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:47.599 15:40:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:47.599 15:40:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:47.599 15:40:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:47.599 15:40:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:47.599 15:40:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:47.599 15:40:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:47.599 15:40:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:47.599 15:40:17 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:47.599 15:40:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:47.599 15:40:17 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:47.599 15:40:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:47.599 15:40:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:47.599 15:40:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:47.599 15:40:17 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.599 15:40:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.599 15:40:17 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.599 15:40:17 -- paths/export.sh@5 -- # export PATH 00:21:47.599 15:40:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:47.599 15:40:17 -- nvmf/common.sh@47 -- # : 0 00:21:47.599 15:40:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:47.599 15:40:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:47.599 15:40:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:47.599 15:40:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:47.599 15:40:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:47.599 15:40:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:47.599 15:40:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:47.599 15:40:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:47.599 15:40:17 -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:47.599 15:40:17 -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:47.599 15:40:17 -- target/host_management.sh@105 -- # nvmftestinit 00:21:47.599 15:40:17 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:47.599 15:40:17 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:47.599 15:40:17 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:47.599 15:40:17 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:47.599 15:40:17 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:47.599 15:40:17 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.599 15:40:17 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:47.599 15:40:17 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:47.599 15:40:17 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:47.599 15:40:17 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:47.599 15:40:17 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:47.599 15:40:17 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:47.599 15:40:17 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:47.599 15:40:17 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:47.599 15:40:17 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:47.599 15:40:17 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:47.599 15:40:17 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:47.599 15:40:17 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:47.599 15:40:17 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:47.599 15:40:17 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:47.599 15:40:17 -- 
nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:47.599 15:40:17 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:47.599 15:40:17 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:47.599 15:40:17 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:47.599 15:40:17 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:47.599 15:40:17 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:47.599 15:40:17 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:47.599 15:40:17 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:47.599 Cannot find device "nvmf_tgt_br" 00:21:47.599 15:40:17 -- nvmf/common.sh@155 -- # true 00:21:47.599 15:40:17 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:47.599 Cannot find device "nvmf_tgt_br2" 00:21:47.599 15:40:17 -- nvmf/common.sh@156 -- # true 00:21:47.599 15:40:17 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:47.599 15:40:17 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:47.599 Cannot find device "nvmf_tgt_br" 00:21:47.599 15:40:17 -- nvmf/common.sh@158 -- # true 00:21:47.599 15:40:17 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:47.599 Cannot find device "nvmf_tgt_br2" 00:21:47.599 15:40:17 -- nvmf/common.sh@159 -- # true 00:21:47.599 15:40:17 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:47.858 15:40:17 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:47.858 15:40:17 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:47.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.858 15:40:17 -- nvmf/common.sh@162 -- # true 00:21:47.858 15:40:17 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:47.858 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:47.858 15:40:17 -- nvmf/common.sh@163 -- # true 00:21:47.858 15:40:17 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:47.858 15:40:17 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:21:47.858 15:40:17 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:47.858 15:40:17 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:47.858 15:40:17 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:47.858 15:40:17 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:47.858 15:40:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:47.858 15:40:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:47.858 15:40:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:47.858 15:40:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:47.858 15:40:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:47.858 15:40:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:47.858 15:40:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:47.858 15:40:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:47.858 15:40:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:47.858 15:40:18 -- nvmf/common.sh@189 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set lo up 00:21:47.858 15:40:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:47.858 15:40:18 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:47.858 15:40:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:47.858 15:40:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:47.858 15:40:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:47.858 15:40:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:47.858 15:40:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:47.858 15:40:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:47.858 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:47.858 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.053 ms 00:21:47.858 00:21:47.858 --- 10.0.0.2 ping statistics --- 00:21:47.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.858 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:21:47.858 15:40:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:47.858 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:47.858 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.033 ms 00:21:47.858 00:21:47.858 --- 10.0.0.3 ping statistics --- 00:21:47.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.858 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:21:47.858 15:40:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:47.858 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:47.858 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.028 ms 00:21:47.858 00:21:47.858 --- 10.0.0.1 ping statistics --- 00:21:47.858 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:47.858 rtt min/avg/max/mdev = 0.028/0.028/0.028/0.000 ms 00:21:47.858 15:40:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:47.858 15:40:18 -- nvmf/common.sh@422 -- # return 0 00:21:47.858 15:40:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:47.858 15:40:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:47.858 15:40:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:47.858 15:40:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:47.858 15:40:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:47.858 15:40:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:47.858 15:40:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:48.115 15:40:18 -- target/host_management.sh@107 -- # run_test nvmf_host_management nvmf_host_management 00:21:48.115 15:40:18 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:21:48.115 15:40:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:48.115 15:40:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.115 ************************************ 00:21:48.115 START TEST nvmf_host_management 00:21:48.115 ************************************ 00:21:48.115 15:40:18 -- common/autotest_common.sh@1111 -- # nvmf_host_management 00:21:48.115 15:40:18 -- target/host_management.sh@69 -- # starttarget 00:21:48.115 15:40:18 -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:21:48.115 15:40:18 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:48.115 15:40:18 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:48.115 15:40:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.115 15:40:18 -- nvmf/common.sh@470 -- # nvmfpid=71417 00:21:48.115 
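[Editor's note] Condensed, the nvmf_veth_init sequence traced above creates a veth pair into a dedicated network namespace and bridges the host-side ends together, which is what makes the 10.0.0.1 / 10.0.0.2 / 10.0.0.3 pings succeed. A rough outline using the names from the trace (the real helper also tears down pre-existing devices and brings up a second target interface):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    # Bridge the host-side peers so initiator and target namespaces can reach each other.
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT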
15:40:18 -- nvmf/common.sh@471 -- # waitforlisten 71417 00:21:48.115 15:40:18 -- common/autotest_common.sh@817 -- # '[' -z 71417 ']' 00:21:48.115 15:40:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.115 15:40:18 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:48.115 15:40:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:48.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.115 15:40:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.115 15:40:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:48.115 15:40:18 -- common/autotest_common.sh@10 -- # set +x 00:21:48.115 [2024-04-26 15:40:18.283200] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:48.115 [2024-04-26 15:40:18.283301] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.372 [2024-04-26 15:40:18.418704] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.372 [2024-04-26 15:40:18.536840] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.372 [2024-04-26 15:40:18.536899] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.372 [2024-04-26 15:40:18.536911] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.372 [2024-04-26 15:40:18.536920] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.373 [2024-04-26 15:40:18.536928] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
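[Editor's note] In outline, nvmfappstart above runs the target inside that namespace and waitforlisten blocks until its RPC socket answers. A minimal sketch of the same idea follows; the real waitforlisten in autotest_common.sh adds retries, timeouts and traps, and the use of rpc_get_methods here is only a stand-in liveness probe, not a claim about the exact call it makes.

    ip netns exec nvmf_tgt_ns_spdk \
        /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
    nvmfpid=$!
    # Poll the RPC socket until the target is ready to accept configuration.
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
          rpc_get_methods &> /dev/null; do
        sleep 0.5
    done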
00:21:48.373 [2024-04-26 15:40:18.537369] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.373 [2024-04-26 15:40:18.537583] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.373 [2024-04-26 15:40:18.537738] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:21:48.373 [2024-04-26 15:40:18.537741] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:49.305 15:40:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:49.305 15:40:19 -- common/autotest_common.sh@850 -- # return 0 00:21:49.305 15:40:19 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:49.305 15:40:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:49.305 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:21:49.305 15:40:19 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.305 15:40:19 -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.305 15:40:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.305 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:21:49.305 [2024-04-26 15:40:19.317196] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.305 15:40:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.305 15:40:19 -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:21:49.305 15:40:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:49.305 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:21:49.305 15:40:19 -- target/host_management.sh@22 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:21:49.305 15:40:19 -- target/host_management.sh@23 -- # cat 00:21:49.305 15:40:19 -- target/host_management.sh@30 -- # rpc_cmd 00:21:49.305 15:40:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:49.305 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:21:49.305 Malloc0 00:21:49.305 [2024-04-26 15:40:19.403686] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.305 15:40:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:49.305 15:40:19 -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:21:49.305 15:40:19 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:49.305 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:21:49.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:49.305 15:40:19 -- target/host_management.sh@73 -- # perfpid=71495 00:21:49.305 15:40:19 -- target/host_management.sh@74 -- # waitforlisten 71495 /var/tmp/bdevperf.sock 00:21:49.305 15:40:19 -- common/autotest_common.sh@817 -- # '[' -z 71495 ']' 00:21:49.305 15:40:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:49.305 15:40:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:49.305 15:40:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
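[Editor's note] By this point the target side is configured: the transport was created with nvmf_create_transport -t tcp -o -u 8192, a Malloc0 bdev exists, and the notice above shows a listener on 10.0.0.2 port 4420. The rpcs.txt batch itself is not echoed in the trace, so the following is only a plausible reconstruction of the usual SPDK RPC sequence behind those notices (subsystem NQN, bdev size/block size, serial and listener address taken from the surrounding trace; the real file may differ):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_malloc_create 64 512 -b Malloc0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDKISFASTANDAWESOME
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    $rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420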
00:21:49.305 15:40:19 -- target/host_management.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:49.305 15:40:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:49.305 15:40:19 -- common/autotest_common.sh@10 -- # set +x 00:21:49.305 15:40:19 -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:21:49.305 15:40:19 -- nvmf/common.sh@521 -- # config=() 00:21:49.305 15:40:19 -- nvmf/common.sh@521 -- # local subsystem config 00:21:49.305 15:40:19 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:49.305 15:40:19 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:49.305 { 00:21:49.305 "params": { 00:21:49.305 "name": "Nvme$subsystem", 00:21:49.305 "trtype": "$TEST_TRANSPORT", 00:21:49.305 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:49.305 "adrfam": "ipv4", 00:21:49.305 "trsvcid": "$NVMF_PORT", 00:21:49.305 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:49.305 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:49.305 "hdgst": ${hdgst:-false}, 00:21:49.305 "ddgst": ${ddgst:-false} 00:21:49.305 }, 00:21:49.305 "method": "bdev_nvme_attach_controller" 00:21:49.305 } 00:21:49.305 EOF 00:21:49.305 )") 00:21:49.305 15:40:19 -- nvmf/common.sh@543 -- # cat 00:21:49.305 15:40:19 -- nvmf/common.sh@545 -- # jq . 00:21:49.305 15:40:19 -- nvmf/common.sh@546 -- # IFS=, 00:21:49.305 15:40:19 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:49.305 "params": { 00:21:49.305 "name": "Nvme0", 00:21:49.305 "trtype": "tcp", 00:21:49.305 "traddr": "10.0.0.2", 00:21:49.305 "adrfam": "ipv4", 00:21:49.305 "trsvcid": "4420", 00:21:49.305 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:49.305 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:49.305 "hdgst": false, 00:21:49.305 "ddgst": false 00:21:49.305 }, 00:21:49.305 "method": "bdev_nvme_attach_controller" 00:21:49.305 }' 00:21:49.305 [2024-04-26 15:40:19.509549] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:49.305 [2024-04-26 15:40:19.509645] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71495 ] 00:21:49.563 [2024-04-26 15:40:19.651764] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.563 [2024-04-26 15:40:19.778584] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.820 Running I/O for 10 seconds... 
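[Editor's note] The bdevperf process above gets its Nvme0 controller from the JSON printed by gen_nvmf_target_json and fed in via process substitution (--json /dev/fd/63). The wait loop traced below (waitforio) then polls bdevperf's RPC socket until the Nvme0n1 bdev has accumulated read I/O, so the host-management fault injection only starts once traffic is actually flowing. Condensed sketch, with the RPC call and the 100-op threshold as shown in the trace and the sleep interval as a guess:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    for ((i = 10; i != 0; i--)); do
        # Ask bdevperf (not the target) how many reads have completed on Nvme0n1.
        reads=$($rpc -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 |
                jq -r '.bdevs[0].num_read_ops')
        (( reads >= 100 )) && break
        sleep 0.25
    done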
00:21:50.387 15:40:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:50.387 15:40:20 -- common/autotest_common.sh@850 -- # return 0 00:21:50.387 15:40:20 -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:50.387 15:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.387 15:40:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.387 15:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.387 15:40:20 -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:50.387 15:40:20 -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:21:50.387 15:40:20 -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:50.387 15:40:20 -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:21:50.387 15:40:20 -- target/host_management.sh@52 -- # local ret=1 00:21:50.387 15:40:20 -- target/host_management.sh@53 -- # local i 00:21:50.387 15:40:20 -- target/host_management.sh@54 -- # (( i = 10 )) 00:21:50.387 15:40:20 -- target/host_management.sh@54 -- # (( i != 0 )) 00:21:50.387 15:40:20 -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:21:50.387 15:40:20 -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:21:50.387 15:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.387 15:40:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.387 15:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.387 15:40:20 -- target/host_management.sh@55 -- # read_io_count=835 00:21:50.387 15:40:20 -- target/host_management.sh@58 -- # '[' 835 -ge 100 ']' 00:21:50.387 15:40:20 -- target/host_management.sh@59 -- # ret=0 00:21:50.387 15:40:20 -- target/host_management.sh@60 -- # break 00:21:50.387 15:40:20 -- target/host_management.sh@64 -- # return 0 00:21:50.387 15:40:20 -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:50.387 15:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.387 15:40:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.387 [2024-04-26 15:40:20.596702] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596750] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596762] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596771] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596780] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596788] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596797] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596805] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the 
state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596814] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596822] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596830] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596839] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596847] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596855] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596863] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596871] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596879] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596887] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596895] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596903] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596911] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596919] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596933] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596942] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596950] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596959] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.387 [2024-04-26 15:40:20.596967] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.596975] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.596984] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.596992] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597000] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597008] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597015] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597025] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597039] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597047] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597055] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597063] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597071] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597079] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597087] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597095] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597103] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597111] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597119] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597127] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597145] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597156] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597164] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597172] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597180] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 
15:40:20.597188] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597196] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597204] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597214] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597223] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597231] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597239] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597247] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597255] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597263] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597271] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 [2024-04-26 15:40:20.597279] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x228cc60 is same with the state(5) to be set 00:21:50.388 15:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.388 15:40:20 -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:21:50.388 15:40:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:21:50.388 15:40:20 -- common/autotest_common.sh@10 -- # set +x 00:21:50.388 [2024-04-26 15:40:20.608543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.388 [2024-04-26 15:40:20.608588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.608609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.388 [2024-04-26 15:40:20.608619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.608638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.388 [2024-04-26 15:40:20.608649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.608662] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:50.388 [2024-04-26 
15:40:20.608672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.608682] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1177b00 is same with the state(5) to be set 00:21:50.388 15:40:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:21:50.388 15:40:20 -- target/host_management.sh@87 -- # sleep 1 00:21:50.388 [2024-04-26 15:40:20.619652] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1177b00 (9): Bad file descriptor 00:21:50.388 [2024-04-26 15:40:20.619794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:122496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:122624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:122752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:122880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:123008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:123136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:123264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:123392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.619985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:123520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.619994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:123648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.620015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:123776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.620035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:123904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.620055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:124032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.620088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.620109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.388 [2024-04-26 15:40:20.620130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.388 [2024-04-26 15:40:20.620155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:124416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:124544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:124672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:124800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:25 nsid:1 lba:126080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:126208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:126336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:126464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:126592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:126720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:126848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:126976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:127104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:127232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:35 nsid:1 lba:127360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:127488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:127616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:127744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:127872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:128000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:128128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:128256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:128384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:128512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:45 nsid:1 lba:128640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:128768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:128896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:129024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.389 [2024-04-26 15:40:20.620976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:129280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.389 [2024-04-26 15:40:20.620985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.620996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:129408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:129536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:55 nsid:1 lba:129920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:130304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:130432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:130560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:50.390 [2024-04-26 15:40:20.621210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:50.390 [2024-04-26 15:40:20.621308] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x11798b0 was disconnected and freed. reset controller. 00:21:50.390 task offset: 122496 on job bdev=Nvme0n1 fails 00:21:50.390 00:21:50.390 Latency(us) 00:21:50.390 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:50.390 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:50.390 Job: Nvme0n1 ended in about 0.65 seconds with error 00:21:50.390 Verification LBA range: start 0x0 length 0x400 00:21:50.390 Nvme0n1 : 0.65 1462.50 91.41 97.81 0.00 39944.52 2010.76 37891.72 00:21:50.390 =================================================================================================================== 00:21:50.390 Total : 1462.50 91.41 97.81 0.00 39944.52 2010.76 37891.72 00:21:50.390 [2024-04-26 15:40:20.622449] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:50.390 [2024-04-26 15:40:20.624888] app.c: 966:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:50.390 [2024-04-26 15:40:20.636297] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
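Every aborted entry in the dump above carries the same completion status pair "(00/08)": Status Code Type 0x0 (Generic Command Status) and Status Code 0x08 (Command Aborted due to SQ Deletion). That is the expected outcome here — the controller reset tears down the submission queue while the verify workload still has I/O outstanding, so every queued READ/WRITE completes as aborted before the qpair is freed. As an illustrative aside (not part of the test scripts), the cid/p/sct/sc/m/dnr fields printed by spdk_nvme_print_completion come out of completion-queue-entry dword 3; a minimal bash sketch of that decoding, with the bit layout taken from the NVMe base specification, is:

# Illustrative decoder for the "(SCT/SC)" pair seen above; dw3 stands for CQE dword 3.
dw3=$(( 0x08 << 17 ))                  # status word matching the aborted entries: sct=0x0, sc=0x08
cid=$((  dw3        & 0xffff ))        # Command Identifier
p=$((   (dw3 >> 16) & 0x1  ))          # Phase tag
sc=$((  (dw3 >> 17) & 0xff ))          # Status Code       -> 0x08 (command aborted due to SQ deletion)
sct=$(( (dw3 >> 25) & 0x7  ))          # Status Code Type  -> 0x0  (generic command status)
m=$((   (dw3 >> 30) & 0x1  ))          # More
dnr=$(( (dw3 >> 31) & 0x1  ))          # Do Not Retry
printf 'cid:%u p:%u sct:0x%x sc:0x%02x m:%u dnr:%u\n' "$cid" "$p" "$sct" "$sc" "$m" "$dnr"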
00:21:51.326 15:40:21 -- target/host_management.sh@91 -- # kill -9 71495 00:21:51.326 /home/vagrant/spdk_repo/spdk/test/nvmf/target/host_management.sh: line 91: kill: (71495) - No such process 00:21:51.326 15:40:21 -- target/host_management.sh@91 -- # true 00:21:51.326 15:40:21 -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:21:51.326 15:40:21 -- target/host_management.sh@100 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:51.326 15:40:21 -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:21:51.326 15:40:21 -- nvmf/common.sh@521 -- # config=() 00:21:51.326 15:40:21 -- nvmf/common.sh@521 -- # local subsystem config 00:21:51.326 15:40:21 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:21:51.326 15:40:21 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:21:51.326 { 00:21:51.326 "params": { 00:21:51.326 "name": "Nvme$subsystem", 00:21:51.326 "trtype": "$TEST_TRANSPORT", 00:21:51.326 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:51.326 "adrfam": "ipv4", 00:21:51.326 "trsvcid": "$NVMF_PORT", 00:21:51.326 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:51.326 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:51.326 "hdgst": ${hdgst:-false}, 00:21:51.326 "ddgst": ${ddgst:-false} 00:21:51.326 }, 00:21:51.326 "method": "bdev_nvme_attach_controller" 00:21:51.326 } 00:21:51.326 EOF 00:21:51.326 )") 00:21:51.584 15:40:21 -- nvmf/common.sh@543 -- # cat 00:21:51.584 15:40:21 -- nvmf/common.sh@545 -- # jq . 00:21:51.584 15:40:21 -- nvmf/common.sh@546 -- # IFS=, 00:21:51.584 15:40:21 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:21:51.584 "params": { 00:21:51.584 "name": "Nvme0", 00:21:51.584 "trtype": "tcp", 00:21:51.584 "traddr": "10.0.0.2", 00:21:51.584 "adrfam": "ipv4", 00:21:51.584 "trsvcid": "4420", 00:21:51.584 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:51.584 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:21:51.584 "hdgst": false, 00:21:51.584 "ddgst": false 00:21:51.584 }, 00:21:51.584 "method": "bdev_nvme_attach_controller" 00:21:51.584 }' 00:21:51.584 [2024-04-26 15:40:21.672735] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:51.584 [2024-04-26 15:40:21.672829] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71545 ] 00:21:51.584 [2024-04-26 15:40:21.811363] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.842 [2024-04-26 15:40:21.936250] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:51.842 Running I/O for 1 seconds... 
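The retry above regenerates the attach parameters with gen_nvmf_target_json and hands them to bdevperf on /dev/fd/62; the fully resolved call is the JSON block printed just before the run starts. Outside the harness, roughly the same run can be reproduced by saving that block into a plain config file — a sketch only: the outer "subsystems"/"bdev"/"config" wrapper is the standard SPDK application JSON-config layout and is assumed here (only the inner object is visible in the log), and the file path is arbitrary:

cat > /tmp/bdevperf_nvme0.json <<'EOF'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme0",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode0",
            "hostnqn": "nqn.2016-06.io.spdk:host0",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
EOF
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /tmp/bdevperf_nvme0.json -q 64 -o 65536 -w verify -t 1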
00:21:52.845 00:21:52.845 Latency(us) 00:21:52.846 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.846 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:52.846 Verification LBA range: start 0x0 length 0x400 00:21:52.846 Nvme0n1 : 1.01 1527.59 95.47 0.00 0.00 41055.12 6106.76 37653.41 00:21:52.846 =================================================================================================================== 00:21:52.846 Total : 1527.59 95.47 0.00 0.00 41055.12 6106.76 37653.41 00:21:53.104 15:40:23 -- target/host_management.sh@102 -- # stoptarget 00:21:53.104 15:40:23 -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:21:53.104 15:40:23 -- target/host_management.sh@37 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevperf.conf 00:21:53.104 15:40:23 -- target/host_management.sh@38 -- # rm -rf /home/vagrant/spdk_repo/spdk/test/nvmf/target/rpcs.txt 00:21:53.104 15:40:23 -- target/host_management.sh@40 -- # nvmftestfini 00:21:53.104 15:40:23 -- nvmf/common.sh@477 -- # nvmfcleanup 00:21:53.104 15:40:23 -- nvmf/common.sh@117 -- # sync 00:21:53.363 15:40:23 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:53.363 15:40:23 -- nvmf/common.sh@120 -- # set +e 00:21:53.363 15:40:23 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:53.363 15:40:23 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:53.363 rmmod nvme_tcp 00:21:53.363 rmmod nvme_fabrics 00:21:53.363 rmmod nvme_keyring 00:21:53.363 15:40:23 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:53.363 15:40:23 -- nvmf/common.sh@124 -- # set -e 00:21:53.363 15:40:23 -- nvmf/common.sh@125 -- # return 0 00:21:53.363 15:40:23 -- nvmf/common.sh@478 -- # '[' -n 71417 ']' 00:21:53.363 15:40:23 -- nvmf/common.sh@479 -- # killprocess 71417 00:21:53.363 15:40:23 -- common/autotest_common.sh@936 -- # '[' -z 71417 ']' 00:21:53.363 15:40:23 -- common/autotest_common.sh@940 -- # kill -0 71417 00:21:53.363 15:40:23 -- common/autotest_common.sh@941 -- # uname 00:21:53.363 15:40:23 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:21:53.363 15:40:23 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 71417 00:21:53.363 15:40:23 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:21:53.363 15:40:23 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:21:53.363 killing process with pid 71417 00:21:53.363 15:40:23 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71417' 00:21:53.363 15:40:23 -- common/autotest_common.sh@955 -- # kill 71417 00:21:53.363 15:40:23 -- common/autotest_common.sh@960 -- # wait 71417 00:21:53.621 [2024-04-26 15:40:23.754430] app.c: 630:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:21:53.621 15:40:23 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:21:53.621 15:40:23 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:21:53.621 15:40:23 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:21:53.621 15:40:23 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:53.621 15:40:23 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:53.621 15:40:23 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.621 15:40:23 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.621 15:40:23 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.621 15:40:23 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:21:53.621 00:21:53.621 real 0m5.591s 00:21:53.622 user 
0m23.612s 00:21:53.622 sys 0m1.166s 00:21:53.622 15:40:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:53.622 15:40:23 -- common/autotest_common.sh@10 -- # set +x 00:21:53.622 ************************************ 00:21:53.622 END TEST nvmf_host_management 00:21:53.622 ************************************ 00:21:53.622 15:40:23 -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:53.622 00:21:53.622 real 0m6.167s 00:21:53.622 user 0m23.763s 00:21:53.622 sys 0m1.428s 00:21:53.622 15:40:23 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:21:53.622 15:40:23 -- common/autotest_common.sh@10 -- # set +x 00:21:53.622 ************************************ 00:21:53.622 END TEST nvmf_host_management 00:21:53.622 ************************************ 00:21:53.622 15:40:23 -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:53.622 15:40:23 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:21:53.622 15:40:23 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:21:53.622 15:40:23 -- common/autotest_common.sh@10 -- # set +x 00:21:53.880 ************************************ 00:21:53.880 START TEST nvmf_lvol 00:21:53.880 ************************************ 00:21:53.880 15:40:23 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:21:53.880 * Looking for test storage... 00:21:53.880 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:21:53.880 15:40:24 -- nvmf/common.sh@7 -- # uname -s 00:21:53.880 15:40:24 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:53.880 15:40:24 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:53.880 15:40:24 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:53.880 15:40:24 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:53.880 15:40:24 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:53.880 15:40:24 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:53.880 15:40:24 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:53.880 15:40:24 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:53.880 15:40:24 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:53.880 15:40:24 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:53.880 15:40:24 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:53.880 15:40:24 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:21:53.880 15:40:24 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:53.880 15:40:24 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:53.880 15:40:24 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:21:53.880 15:40:24 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:53.880 15:40:24 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:21:53.880 15:40:24 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:53.880 15:40:24 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:53.880 15:40:24 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:53.880 15:40:24 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.880 15:40:24 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.880 15:40:24 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.880 15:40:24 -- paths/export.sh@5 -- # export PATH 00:21:53.880 15:40:24 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:53.880 15:40:24 -- nvmf/common.sh@47 -- # : 0 00:21:53.880 15:40:24 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:53.880 15:40:24 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:53.880 15:40:24 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:53.880 15:40:24 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:53.880 15:40:24 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:53.880 15:40:24 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:53.880 15:40:24 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:53.880 15:40:24 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:21:53.880 15:40:24 -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:21:53.880 15:40:24 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:21:53.880 15:40:24 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
00:21:53.880 15:40:24 -- nvmf/common.sh@437 -- # prepare_net_devs 00:21:53.880 15:40:24 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:21:53.880 15:40:24 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:21:53.880 15:40:24 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:53.880 15:40:24 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:21:53.880 15:40:24 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:53.880 15:40:24 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:21:53.880 15:40:24 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:21:53.880 15:40:24 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:21:53.880 15:40:24 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:21:53.880 15:40:24 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:21:53.880 15:40:24 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:21:53.880 15:40:24 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:53.880 15:40:24 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:53.880 15:40:24 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:21:53.880 15:40:24 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:21:53.880 15:40:24 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:21:53.880 15:40:24 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:21:53.880 15:40:24 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:21:53.880 15:40:24 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:53.880 15:40:24 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:21:53.880 15:40:24 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:21:53.880 15:40:24 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:21:53.880 15:40:24 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:21:53.880 15:40:24 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:21:53.880 15:40:24 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:21:53.880 Cannot find device "nvmf_tgt_br" 00:21:53.880 15:40:24 -- nvmf/common.sh@155 -- # true 00:21:53.881 15:40:24 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:21:53.881 Cannot find device "nvmf_tgt_br2" 00:21:53.881 15:40:24 -- nvmf/common.sh@156 -- # true 00:21:53.881 15:40:24 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:21:53.881 15:40:24 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:21:53.881 Cannot find device "nvmf_tgt_br" 00:21:53.881 15:40:24 -- nvmf/common.sh@158 -- # true 00:21:53.881 15:40:24 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:21:53.881 Cannot find device "nvmf_tgt_br2" 00:21:53.881 15:40:24 -- nvmf/common.sh@159 -- # true 00:21:53.881 15:40:24 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:21:54.139 15:40:24 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:21:54.139 15:40:24 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:21:54.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.139 15:40:24 -- nvmf/common.sh@162 -- # true 00:21:54.139 15:40:24 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:21:54.139 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:21:54.139 15:40:24 -- nvmf/common.sh@163 -- # true 00:21:54.139 15:40:24 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:21:54.139 15:40:24 -- nvmf/common.sh@169 -- # ip link add 
nvmf_init_if type veth peer name nvmf_init_br 00:21:54.139 15:40:24 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:21:54.139 15:40:24 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:21:54.139 15:40:24 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:21:54.139 15:40:24 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:21:54.139 15:40:24 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:21:54.139 15:40:24 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:21:54.139 15:40:24 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:21:54.139 15:40:24 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:21:54.139 15:40:24 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:21:54.139 15:40:24 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:21:54.139 15:40:24 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:21:54.139 15:40:24 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:21:54.139 15:40:24 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:21:54.139 15:40:24 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:21:54.139 15:40:24 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:21:54.139 15:40:24 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:21:54.139 15:40:24 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:21:54.139 15:40:24 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:21:54.139 15:40:24 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:21:54.139 15:40:24 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:21:54.139 15:40:24 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:21:54.139 15:40:24 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:21:54.139 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:54.139 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:21:54.139 00:21:54.139 --- 10.0.0.2 ping statistics --- 00:21:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.139 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:21:54.139 15:40:24 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:21:54.139 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:21:54.139 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.064 ms 00:21:54.139 00:21:54.139 --- 10.0.0.3 ping statistics --- 00:21:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.139 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:21:54.139 15:40:24 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:21:54.139 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:54.139 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.034 ms 00:21:54.139 00:21:54.139 --- 10.0.0.1 ping statistics --- 00:21:54.139 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:54.139 rtt min/avg/max/mdev = 0.034/0.034/0.034/0.000 ms 00:21:54.139 15:40:24 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:54.139 15:40:24 -- nvmf/common.sh@422 -- # return 0 00:21:54.139 15:40:24 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:21:54.139 15:40:24 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:54.139 15:40:24 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:21:54.139 15:40:24 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:21:54.139 15:40:24 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:54.139 15:40:24 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:21:54.139 15:40:24 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:21:54.398 15:40:24 -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:21:54.398 15:40:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:21:54.398 15:40:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:21:54.398 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:21:54.398 15:40:24 -- nvmf/common.sh@470 -- # nvmfpid=71773 00:21:54.398 15:40:24 -- nvmf/common.sh@471 -- # waitforlisten 71773 00:21:54.398 15:40:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:21:54.398 15:40:24 -- common/autotest_common.sh@817 -- # '[' -z 71773 ']' 00:21:54.398 15:40:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:54.398 15:40:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:21:54.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:54.398 15:40:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:54.398 15:40:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:21:54.398 15:40:24 -- common/autotest_common.sh@10 -- # set +x 00:21:54.398 [2024-04-26 15:40:24.489049] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:21:54.398 [2024-04-26 15:40:24.489148] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:54.398 [2024-04-26 15:40:24.627877] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:54.656 [2024-04-26 15:40:24.741748] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:54.656 [2024-04-26 15:40:24.741818] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:54.656 [2024-04-26 15:40:24.741830] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:54.656 [2024-04-26 15:40:24.741855] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:54.656 [2024-04-26 15:40:24.741869] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
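For orientation before the transport and volumes are created below: once this target finishes starting, the whole nvmf_lvol flow reduces to a short rpc.py sequence — two 64 MB malloc bdevs striped into a raid0, an lvstore on the raid, a 20 MB lvol exported through nqn.2016-06.io.spdk:cnode0, then snapshot, resize, clone and inflate. A condensed sketch of those calls as they appear further down in the log (the shell variables stand in for the per-run UUIDs each step returns):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512                                   # -> Malloc0
$rpc bdev_malloc_create 64 512                                   # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)                   # lvstore UUID
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)                  # lvol UUID
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)              # snapshot UUID
$rpc bdev_lvol_resize "$lvol" 30
clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)                   # clone UUID
$rpc bdev_lvol_inflate "$clone"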
00:21:54.656 [2024-04-26 15:40:24.742009] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.656 [2024-04-26 15:40:24.742118] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:21:54.656 [2024-04-26 15:40:24.742122] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.223 15:40:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:21:55.223 15:40:25 -- common/autotest_common.sh@850 -- # return 0 00:21:55.223 15:40:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:21:55.223 15:40:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:21:55.223 15:40:25 -- common/autotest_common.sh@10 -- # set +x 00:21:55.223 15:40:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.223 15:40:25 -- target/nvmf_lvol.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:55.481 [2024-04-26 15:40:25.768067] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:55.739 15:40:25 -- target/nvmf_lvol.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:55.998 15:40:26 -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:21:55.998 15:40:26 -- target/nvmf_lvol.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:56.255 15:40:26 -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:21:56.255 15:40:26 -- target/nvmf_lvol.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:21:56.514 15:40:26 -- target/nvmf_lvol.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:21:56.772 15:40:27 -- target/nvmf_lvol.sh@29 -- # lvs=3d64e79e-3176-4f6b-84a1-209a14001586 00:21:56.772 15:40:27 -- target/nvmf_lvol.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u 3d64e79e-3176-4f6b-84a1-209a14001586 lvol 20 00:21:57.031 15:40:27 -- target/nvmf_lvol.sh@32 -- # lvol=a19e2dfd-3f87-4f50-8e70-877936920bfc 00:21:57.031 15:40:27 -- target/nvmf_lvol.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:21:57.598 15:40:27 -- target/nvmf_lvol.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 a19e2dfd-3f87-4f50-8e70-877936920bfc 00:21:57.855 15:40:27 -- target/nvmf_lvol.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:58.114 [2024-04-26 15:40:28.224091] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:58.114 15:40:28 -- target/nvmf_lvol.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:58.373 15:40:28 -- target/nvmf_lvol.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:21:58.373 15:40:28 -- target/nvmf_lvol.sh@42 -- # perf_pid=71928 00:21:58.373 15:40:28 -- target/nvmf_lvol.sh@44 -- # sleep 1 00:21:59.362 15:40:29 -- target/nvmf_lvol.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_snapshot a19e2dfd-3f87-4f50-8e70-877936920bfc MY_SNAPSHOT 00:21:59.620 15:40:29 -- target/nvmf_lvol.sh@47 -- # snapshot=4f83e516-2c4c-4cec-a7da-ad2bc8fa2e5b 00:21:59.620 15:40:29 -- target/nvmf_lvol.sh@48 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_resize a19e2dfd-3f87-4f50-8e70-877936920bfc 30 00:22:00.186 15:40:30 -- target/nvmf_lvol.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_clone 4f83e516-2c4c-4cec-a7da-ad2bc8fa2e5b MY_CLONE 00:22:00.445 15:40:30 -- target/nvmf_lvol.sh@49 -- # clone=e60ed70e-711c-47db-8bbb-823029a36926 00:22:00.445 15:40:30 -- target/nvmf_lvol.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_inflate e60ed70e-711c-47db-8bbb-823029a36926 00:22:01.023 15:40:31 -- target/nvmf_lvol.sh@53 -- # wait 71928 00:22:09.133 Initializing NVMe Controllers 00:22:09.133 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:22:09.133 Controller IO queue size 128, less than required. 00:22:09.133 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:09.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:22:09.133 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:22:09.133 Initialization complete. Launching workers. 00:22:09.133 ======================================================== 00:22:09.133 Latency(us) 00:22:09.133 Device Information : IOPS MiB/s Average min max 00:22:09.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10746.40 41.98 11919.39 2655.93 81597.61 00:22:09.133 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10947.70 42.76 11698.54 770.70 73992.00 00:22:09.133 ======================================================== 00:22:09.133 Total : 21694.10 84.74 11807.94 770.70 81597.61 00:22:09.133 00:22:09.133 15:40:38 -- target/nvmf_lvol.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:09.133 15:40:39 -- target/nvmf_lvol.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete a19e2dfd-3f87-4f50-8e70-877936920bfc 00:22:09.133 15:40:39 -- target/nvmf_lvol.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3d64e79e-3176-4f6b-84a1-209a14001586 00:22:09.391 15:40:39 -- target/nvmf_lvol.sh@60 -- # rm -f 00:22:09.391 15:40:39 -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:22:09.391 15:40:39 -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:22:09.391 15:40:39 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:09.391 15:40:39 -- nvmf/common.sh@117 -- # sync 00:22:09.649 15:40:39 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:09.649 15:40:39 -- nvmf/common.sh@120 -- # set +e 00:22:09.649 15:40:39 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:09.649 15:40:39 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:09.649 rmmod nvme_tcp 00:22:09.649 rmmod nvme_fabrics 00:22:09.649 rmmod nvme_keyring 00:22:09.649 15:40:39 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:09.649 15:40:39 -- nvmf/common.sh@124 -- # set -e 00:22:09.649 15:40:39 -- nvmf/common.sh@125 -- # return 0 00:22:09.649 15:40:39 -- nvmf/common.sh@478 -- # '[' -n 71773 ']' 00:22:09.649 15:40:39 -- nvmf/common.sh@479 -- # killprocess 71773 00:22:09.649 15:40:39 -- common/autotest_common.sh@936 -- # '[' -z 71773 ']' 00:22:09.649 15:40:39 -- common/autotest_common.sh@940 -- # kill -0 71773 00:22:09.649 15:40:39 -- common/autotest_common.sh@941 -- # uname 00:22:09.649 15:40:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:09.649 15:40:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 
71773 00:22:09.649 15:40:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:09.649 killing process with pid 71773 00:22:09.649 15:40:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:09.649 15:40:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 71773' 00:22:09.649 15:40:39 -- common/autotest_common.sh@955 -- # kill 71773 00:22:09.649 15:40:39 -- common/autotest_common.sh@960 -- # wait 71773 00:22:09.907 15:40:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:09.907 15:40:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:09.907 15:40:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:09.907 15:40:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:09.907 15:40:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:09.907 15:40:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:09.907 15:40:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:09.907 15:40:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.202 15:40:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:10.202 ************************************ 00:22:10.202 END TEST nvmf_lvol 00:22:10.202 ************************************ 00:22:10.202 00:22:10.202 real 0m16.253s 00:22:10.202 user 1m7.607s 00:22:10.202 sys 0m3.931s 00:22:10.202 15:40:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:10.202 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.202 15:40:40 -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:22:10.202 15:40:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:10.202 15:40:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:10.202 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.202 ************************************ 00:22:10.202 START TEST nvmf_lvs_grow 00:22:10.202 ************************************ 00:22:10.202 15:40:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:22:10.202 * Looking for test storage... 
00:22:10.202 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:10.202 15:40:40 -- target/nvmf_lvs_grow.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:10.202 15:40:40 -- nvmf/common.sh@7 -- # uname -s 00:22:10.202 15:40:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:10.202 15:40:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:10.202 15:40:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:10.202 15:40:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:10.202 15:40:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:10.202 15:40:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:10.202 15:40:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:10.202 15:40:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:10.202 15:40:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:10.202 15:40:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:10.202 15:40:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:22:10.202 15:40:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:22:10.202 15:40:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:10.202 15:40:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:10.202 15:40:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:10.202 15:40:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:10.202 15:40:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:10.202 15:40:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:10.202 15:40:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:10.202 15:40:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:10.202 15:40:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.202 15:40:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.202 15:40:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.203 15:40:40 -- paths/export.sh@5 -- # export PATH 00:22:10.203 15:40:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:10.203 15:40:40 -- nvmf/common.sh@47 -- # : 0 00:22:10.203 15:40:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:10.203 15:40:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:10.203 15:40:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:10.203 15:40:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:10.203 15:40:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:10.203 15:40:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:10.203 15:40:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:10.203 15:40:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:10.203 15:40:40 -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:10.203 15:40:40 -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:10.203 15:40:40 -- target/nvmf_lvs_grow.sh@97 -- # nvmftestinit 00:22:10.203 15:40:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:10.203 15:40:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:10.203 15:40:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:10.203 15:40:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:10.203 15:40:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:10.203 15:40:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:10.203 15:40:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:10.203 15:40:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:10.203 15:40:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:10.203 15:40:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:10.203 15:40:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:10.203 15:40:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:10.203 15:40:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:10.203 15:40:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:10.203 15:40:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:10.203 15:40:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:10.203 15:40:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:10.203 15:40:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:10.203 15:40:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:10.203 15:40:40 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:10.203 15:40:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:10.203 15:40:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:10.203 15:40:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:10.203 15:40:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:10.203 15:40:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:10.203 15:40:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:10.203 15:40:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:10.203 15:40:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:10.203 Cannot find device "nvmf_tgt_br" 00:22:10.203 15:40:40 -- nvmf/common.sh@155 -- # true 00:22:10.203 15:40:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:10.474 Cannot find device "nvmf_tgt_br2" 00:22:10.474 15:40:40 -- nvmf/common.sh@156 -- # true 00:22:10.474 15:40:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:10.474 15:40:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:10.474 Cannot find device "nvmf_tgt_br" 00:22:10.474 15:40:40 -- nvmf/common.sh@158 -- # true 00:22:10.474 15:40:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:10.474 Cannot find device "nvmf_tgt_br2" 00:22:10.474 15:40:40 -- nvmf/common.sh@159 -- # true 00:22:10.474 15:40:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:10.474 15:40:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:10.474 15:40:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:10.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:10.474 15:40:40 -- nvmf/common.sh@162 -- # true 00:22:10.474 15:40:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:10.474 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:10.474 15:40:40 -- nvmf/common.sh@163 -- # true 00:22:10.474 15:40:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:10.474 15:40:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:10.474 15:40:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:10.474 15:40:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:10.474 15:40:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:10.474 15:40:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:10.474 15:40:40 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:10.474 15:40:40 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:10.474 15:40:40 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:10.474 15:40:40 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:10.474 15:40:40 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:10.474 15:40:40 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:10.474 15:40:40 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:10.474 15:40:40 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:10.474 15:40:40 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
00:22:10.474 15:40:40 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:10.474 15:40:40 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:10.474 15:40:40 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:10.733 15:40:40 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:10.733 15:40:40 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:10.733 15:40:40 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:10.733 15:40:40 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:10.733 15:40:40 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:10.733 15:40:40 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:10.733 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:10.733 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:22:10.733 00:22:10.733 --- 10.0.0.2 ping statistics --- 00:22:10.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.733 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:22:10.733 15:40:40 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:10.733 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:10.733 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.048 ms 00:22:10.733 00:22:10.733 --- 10.0.0.3 ping statistics --- 00:22:10.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.733 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:22:10.733 15:40:40 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:10.733 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:10.733 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.050 ms 00:22:10.733 00:22:10.733 --- 10.0.0.1 ping statistics --- 00:22:10.733 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:10.733 rtt min/avg/max/mdev = 0.050/0.050/0.050/0.000 ms 00:22:10.733 15:40:40 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:10.733 15:40:40 -- nvmf/common.sh@422 -- # return 0 00:22:10.733 15:40:40 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:10.733 15:40:40 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:10.733 15:40:40 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:10.733 15:40:40 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:10.733 15:40:40 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:10.733 15:40:40 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:10.733 15:40:40 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:10.733 15:40:40 -- target/nvmf_lvs_grow.sh@98 -- # nvmfappstart -m 0x1 00:22:10.733 15:40:40 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:10.733 15:40:40 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:10.733 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.733 15:40:40 -- nvmf/common.sh@470 -- # nvmfpid=72284 00:22:10.733 15:40:40 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:10.733 15:40:40 -- nvmf/common.sh@471 -- # waitforlisten 72284 00:22:10.733 15:40:40 -- common/autotest_common.sh@817 -- # '[' -z 72284 ']' 00:22:10.733 15:40:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:10.733 15:40:40 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:10.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
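[Editor's note] The block above bridges the three host-side veth ends together, opens the NVMe/TCP port in iptables, verifies connectivity with pings, loads nvme-tcp, and starts nvmf_tgt inside the namespace. A condensed sketch of that sequence follows; paths, addresses, and the core mask mirror this log, while the socket-wait loop is only a simplified stand-in for the harness's waitforlisten helper.

  ip link add nvmf_br type bridge && ip link set nvmf_br up
  for dev in nvmf_init_br nvmf_tgt_br nvmf_tgt_br2; do ip link set "$dev" master nvmf_br; done
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP to the target port
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT                 # let traffic hairpin across the bridge
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3                            # host -> namespace
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1                   # namespace -> host
  modprobe nvme-tcp
  # start the SPDK target inside the namespace, then wait for its RPC socket
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  until [ -S /var/tmp/spdk.sock ]; do sleep 0.1; done                 # simplified stand-in for waitforlisten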
00:22:10.733 15:40:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:10.733 15:40:40 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:10.733 15:40:40 -- common/autotest_common.sh@10 -- # set +x 00:22:10.733 [2024-04-26 15:40:40.920137] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:10.733 [2024-04-26 15:40:40.920285] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:10.992 [2024-04-26 15:40:41.062862] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.992 [2024-04-26 15:40:41.210659] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:10.992 [2024-04-26 15:40:41.210729] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:10.992 [2024-04-26 15:40:41.210755] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:10.992 [2024-04-26 15:40:41.210773] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:10.992 [2024-04-26 15:40:41.210782] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:10.992 [2024-04-26 15:40:41.210834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:11.927 15:40:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:11.927 15:40:41 -- common/autotest_common.sh@850 -- # return 0 00:22:11.927 15:40:41 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:11.927 15:40:41 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:11.927 15:40:41 -- common/autotest_common.sh@10 -- # set +x 00:22:11.927 15:40:41 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:11.927 15:40:41 -- target/nvmf_lvs_grow.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:12.185 [2024-04-26 15:40:42.221395] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@101 -- # run_test lvs_grow_clean lvs_grow 00:22:12.185 15:40:42 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:22:12.185 15:40:42 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:12.185 15:40:42 -- common/autotest_common.sh@10 -- # set +x 00:22:12.185 ************************************ 00:22:12.185 START TEST lvs_grow_clean 00:22:12.185 ************************************ 00:22:12.185 15:40:42 -- common/autotest_common.sh@1111 -- # lvs_grow 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M 
/home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:12.185 15:40:42 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:12.443 15:40:42 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:12.443 15:40:42 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:12.709 15:40:42 -- target/nvmf_lvs_grow.sh@28 -- # lvs=c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:12.709 15:40:42 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:12.709 15:40:42 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:12.983 15:40:43 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:12.983 15:40:43 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:12.983 15:40:43 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u c1b0e1b7-af9c-4fcb-a040-883616117651 lvol 150 00:22:13.239 15:40:43 -- target/nvmf_lvs_grow.sh@33 -- # lvol=7485c160-19e9-4ac7-87f9-d5f3579bcdc4 00:22:13.239 15:40:43 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:13.239 15:40:43 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:13.497 [2024-04-26 15:40:43.751987] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:13.497 [2024-04-26 15:40:43.752082] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:13.497 true 00:22:13.497 15:40:43 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:13.497 15:40:43 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:13.755 15:40:44 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:13.755 15:40:44 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:22:14.013 15:40:44 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7485c160-19e9-4ac7-87f9-d5f3579bcdc4 00:22:14.271 15:40:44 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:14.529 [2024-04-26 15:40:44.740583] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:14.529 15:40:44 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:14.787 15:40:45 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72457 00:22:14.787 15:40:45 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:14.787 15:40:45 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:14.787 15:40:45 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72457 
/var/tmp/bdevperf.sock 00:22:14.787 15:40:45 -- common/autotest_common.sh@817 -- # '[' -z 72457 ']' 00:22:14.787 15:40:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:14.787 15:40:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:14.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:14.787 15:40:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:14.787 15:40:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:14.787 15:40:45 -- common/autotest_common.sh@10 -- # set +x 00:22:15.045 [2024-04-26 15:40:45.106306] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:15.045 [2024-04-26 15:40:45.106396] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72457 ] 00:22:15.045 [2024-04-26 15:40:45.243166] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:15.303 [2024-04-26 15:40:45.358681] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:15.303 15:40:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:15.303 15:40:45 -- common/autotest_common.sh@850 -- # return 0 00:22:15.303 15:40:45 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:15.560 Nvme0n1 00:22:15.560 15:40:45 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:15.818 [ 00:22:15.818 { 00:22:15.818 "aliases": [ 00:22:15.818 "7485c160-19e9-4ac7-87f9-d5f3579bcdc4" 00:22:15.818 ], 00:22:15.818 "assigned_rate_limits": { 00:22:15.818 "r_mbytes_per_sec": 0, 00:22:15.818 "rw_ios_per_sec": 0, 00:22:15.818 "rw_mbytes_per_sec": 0, 00:22:15.818 "w_mbytes_per_sec": 0 00:22:15.818 }, 00:22:15.818 "block_size": 4096, 00:22:15.818 "claimed": false, 00:22:15.818 "driver_specific": { 00:22:15.818 "mp_policy": "active_passive", 00:22:15.818 "nvme": [ 00:22:15.818 { 00:22:15.818 "ctrlr_data": { 00:22:15.818 "ana_reporting": false, 00:22:15.818 "cntlid": 1, 00:22:15.818 "firmware_revision": "24.05", 00:22:15.818 "model_number": "SPDK bdev Controller", 00:22:15.818 "multi_ctrlr": true, 00:22:15.818 "oacs": { 00:22:15.818 "firmware": 0, 00:22:15.818 "format": 0, 00:22:15.818 "ns_manage": 0, 00:22:15.818 "security": 0 00:22:15.818 }, 00:22:15.818 "serial_number": "SPDK0", 00:22:15.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.818 "vendor_id": "0x8086" 00:22:15.818 }, 00:22:15.818 "ns_data": { 00:22:15.818 "can_share": true, 00:22:15.818 "id": 1 00:22:15.818 }, 00:22:15.818 "trid": { 00:22:15.818 "adrfam": "IPv4", 00:22:15.818 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:15.818 "traddr": "10.0.0.2", 00:22:15.818 "trsvcid": "4420", 00:22:15.818 "trtype": "TCP" 00:22:15.818 }, 00:22:15.818 "vs": { 00:22:15.818 "nvme_version": "1.3" 00:22:15.818 } 00:22:15.818 } 00:22:15.818 ] 00:22:15.818 }, 00:22:15.818 "memory_domains": [ 00:22:15.818 { 00:22:15.818 "dma_device_id": "system", 00:22:15.818 "dma_device_type": 1 00:22:15.818 } 00:22:15.818 ], 00:22:15.818 "name": "Nvme0n1", 00:22:15.818 "num_blocks": 38912, 00:22:15.818 "product_name": "NVMe 
disk", 00:22:15.818 "supported_io_types": { 00:22:15.818 "abort": true, 00:22:15.818 "compare": true, 00:22:15.818 "compare_and_write": true, 00:22:15.818 "flush": true, 00:22:15.818 "nvme_admin": true, 00:22:15.818 "nvme_io": true, 00:22:15.818 "read": true, 00:22:15.818 "reset": true, 00:22:15.818 "unmap": true, 00:22:15.818 "write": true, 00:22:15.818 "write_zeroes": true 00:22:15.818 }, 00:22:15.818 "uuid": "7485c160-19e9-4ac7-87f9-d5f3579bcdc4", 00:22:15.818 "zoned": false 00:22:15.818 } 00:22:15.818 ] 00:22:15.818 15:40:46 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:15.818 15:40:46 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72491 00:22:15.818 15:40:46 -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:22:16.076 Running I/O for 10 seconds... 00:22:17.010 Latency(us) 00:22:17.010 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:17.010 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:17.010 Nvme0n1 : 1.00 8160.00 31.88 0.00 0.00 0.00 0.00 0.00 00:22:17.010 =================================================================================================================== 00:22:17.010 Total : 8160.00 31.88 0.00 0.00 0.00 0.00 0.00 00:22:17.010 00:22:17.943 15:40:48 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:17.943 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:17.943 Nvme0n1 : 2.00 8344.00 32.59 0.00 0.00 0.00 0.00 0.00 00:22:17.944 =================================================================================================================== 00:22:17.944 Total : 8344.00 32.59 0.00 0.00 0.00 0.00 0.00 00:22:17.944 00:22:18.201 true 00:22:18.201 15:40:48 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:18.201 15:40:48 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:18.459 15:40:48 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:18.459 15:40:48 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:18.459 15:40:48 -- target/nvmf_lvs_grow.sh@65 -- # wait 72491 00:22:19.025 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:19.025 Nvme0n1 : 3.00 8499.00 33.20 0.00 0.00 0.00 0.00 0.00 00:22:19.025 =================================================================================================================== 00:22:19.025 Total : 8499.00 33.20 0.00 0.00 0.00 0.00 0.00 00:22:19.025 00:22:19.957 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:19.957 Nvme0n1 : 4.00 8520.00 33.28 0.00 0.00 0.00 0.00 0.00 00:22:19.957 =================================================================================================================== 00:22:19.957 Total : 8520.00 33.28 0.00 0.00 0.00 0.00 0.00 00:22:19.957 00:22:20.891 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:20.891 Nvme0n1 : 5.00 8518.40 33.27 0.00 0.00 0.00 0.00 0.00 00:22:20.891 =================================================================================================================== 00:22:20.891 Total : 8518.40 33.27 0.00 0.00 0.00 0.00 0.00 00:22:20.891 00:22:22.301 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:22.301 Nvme0n1 : 6.00 8485.17 33.15 
0.00 0.00 0.00 0.00 0.00 00:22:22.301 =================================================================================================================== 00:22:22.301 Total : 8485.17 33.15 0.00 0.00 0.00 0.00 0.00 00:22:22.301 00:22:23.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:23.233 Nvme0n1 : 7.00 8429.86 32.93 0.00 0.00 0.00 0.00 0.00 00:22:23.233 =================================================================================================================== 00:22:23.233 Total : 8429.86 32.93 0.00 0.00 0.00 0.00 0.00 00:22:23.233 00:22:24.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:24.167 Nvme0n1 : 8.00 8429.12 32.93 0.00 0.00 0.00 0.00 0.00 00:22:24.167 =================================================================================================================== 00:22:24.167 Total : 8429.12 32.93 0.00 0.00 0.00 0.00 0.00 00:22:24.167 00:22:25.108 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:25.108 Nvme0n1 : 9.00 8416.67 32.88 0.00 0.00 0.00 0.00 0.00 00:22:25.108 =================================================================================================================== 00:22:25.108 Total : 8416.67 32.88 0.00 0.00 0.00 0.00 0.00 00:22:25.108 00:22:26.054 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:26.054 Nvme0n1 : 10.00 8407.40 32.84 0.00 0.00 0.00 0.00 0.00 00:22:26.054 =================================================================================================================== 00:22:26.054 Total : 8407.40 32.84 0.00 0.00 0.00 0.00 0.00 00:22:26.054 00:22:26.054 00:22:26.054 Latency(us) 00:22:26.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.055 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:26.055 Nvme0n1 : 10.01 8409.81 32.85 0.00 0.00 15215.78 7000.44 45994.36 00:22:26.055 =================================================================================================================== 00:22:26.055 Total : 8409.81 32.85 0.00 0.00 15215.78 7000.44 45994.36 00:22:26.055 0 00:22:26.055 15:40:56 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72457 00:22:26.055 15:40:56 -- common/autotest_common.sh@936 -- # '[' -z 72457 ']' 00:22:26.055 15:40:56 -- common/autotest_common.sh@940 -- # kill -0 72457 00:22:26.055 15:40:56 -- common/autotest_common.sh@941 -- # uname 00:22:26.055 15:40:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:26.055 15:40:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72457 00:22:26.055 15:40:56 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:26.055 killing process with pid 72457 00:22:26.055 15:40:56 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:26.055 15:40:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72457' 00:22:26.055 Received shutdown signal, test time was about 10.000000 seconds 00:22:26.055 00:22:26.055 Latency(us) 00:22:26.055 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:26.055 =================================================================================================================== 00:22:26.055 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:26.055 15:40:56 -- common/autotest_common.sh@955 -- # kill 72457 00:22:26.055 15:40:56 -- common/autotest_common.sh@960 -- # wait 72457 00:22:26.312 15:40:56 -- target/nvmf_lvs_grow.sh@68 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:26.570 15:40:56 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:26.570 15:40:56 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:26.828 15:40:57 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:26.828 15:40:57 -- target/nvmf_lvs_grow.sh@71 -- # [[ '' == \d\i\r\t\y ]] 00:22:26.828 15:40:57 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:27.086 [2024-04-26 15:40:57.231775] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:27.086 15:40:57 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:27.086 15:40:57 -- common/autotest_common.sh@638 -- # local es=0 00:22:27.086 15:40:57 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:27.086 15:40:57 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:27.086 15:40:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:27.086 15:40:57 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:27.086 15:40:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:27.086 15:40:57 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:27.086 15:40:57 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:27.086 15:40:57 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:27.086 15:40:57 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:27.086 15:40:57 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:27.344 2024/04/26 15:40:57 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:c1b0e1b7-af9c-4fcb-a040-883616117651], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:22:27.344 request: 00:22:27.344 { 00:22:27.344 "method": "bdev_lvol_get_lvstores", 00:22:27.344 "params": { 00:22:27.344 "uuid": "c1b0e1b7-af9c-4fcb-a040-883616117651" 00:22:27.344 } 00:22:27.344 } 00:22:27.344 Got JSON-RPC error response 00:22:27.344 GoRPCClient: error on JSON-RPC call 00:22:27.344 15:40:57 -- common/autotest_common.sh@641 -- # es=1 00:22:27.344 15:40:57 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:27.344 15:40:57 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:27.344 15:40:57 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:27.344 15:40:57 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:27.601 aio_bdev 00:22:27.601 15:40:57 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 7485c160-19e9-4ac7-87f9-d5f3579bcdc4 00:22:27.601 15:40:57 -- common/autotest_common.sh@885 -- # local bdev_name=7485c160-19e9-4ac7-87f9-d5f3579bcdc4 00:22:27.601 15:40:57 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:27.601 15:40:57 -- common/autotest_common.sh@887 -- # 
local i 00:22:27.601 15:40:57 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:27.601 15:40:57 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:27.601 15:40:57 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:27.859 15:40:58 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 7485c160-19e9-4ac7-87f9-d5f3579bcdc4 -t 2000 00:22:28.117 [ 00:22:28.117 { 00:22:28.117 "aliases": [ 00:22:28.117 "lvs/lvol" 00:22:28.117 ], 00:22:28.117 "assigned_rate_limits": { 00:22:28.117 "r_mbytes_per_sec": 0, 00:22:28.117 "rw_ios_per_sec": 0, 00:22:28.117 "rw_mbytes_per_sec": 0, 00:22:28.117 "w_mbytes_per_sec": 0 00:22:28.117 }, 00:22:28.117 "block_size": 4096, 00:22:28.117 "claimed": false, 00:22:28.117 "driver_specific": { 00:22:28.117 "lvol": { 00:22:28.117 "base_bdev": "aio_bdev", 00:22:28.117 "clone": false, 00:22:28.117 "esnap_clone": false, 00:22:28.118 "lvol_store_uuid": "c1b0e1b7-af9c-4fcb-a040-883616117651", 00:22:28.118 "snapshot": false, 00:22:28.118 "thin_provision": false 00:22:28.118 } 00:22:28.118 }, 00:22:28.118 "name": "7485c160-19e9-4ac7-87f9-d5f3579bcdc4", 00:22:28.118 "num_blocks": 38912, 00:22:28.118 "product_name": "Logical Volume", 00:22:28.118 "supported_io_types": { 00:22:28.118 "abort": false, 00:22:28.118 "compare": false, 00:22:28.118 "compare_and_write": false, 00:22:28.118 "flush": false, 00:22:28.118 "nvme_admin": false, 00:22:28.118 "nvme_io": false, 00:22:28.118 "read": true, 00:22:28.118 "reset": true, 00:22:28.118 "unmap": true, 00:22:28.118 "write": true, 00:22:28.118 "write_zeroes": true 00:22:28.118 }, 00:22:28.118 "uuid": "7485c160-19e9-4ac7-87f9-d5f3579bcdc4", 00:22:28.118 "zoned": false 00:22:28.118 } 00:22:28.118 ] 00:22:28.118 15:40:58 -- common/autotest_common.sh@893 -- # return 0 00:22:28.118 15:40:58 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:28.118 15:40:58 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:28.459 15:40:58 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:28.459 15:40:58 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:28.459 15:40:58 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:28.717 15:40:58 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:28.717 15:40:58 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 7485c160-19e9-4ac7-87f9-d5f3579bcdc4 00:22:28.974 15:40:59 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c1b0e1b7-af9c-4fcb-a040-883616117651 00:22:29.231 15:40:59 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:29.503 15:40:59 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:30.068 ************************************ 00:22:30.068 END TEST lvs_grow_clean 00:22:30.068 ************************************ 00:22:30.068 00:22:30.068 real 0m17.844s 00:22:30.068 user 0m17.024s 00:22:30.068 sys 0m2.162s 00:22:30.068 15:41:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:30.068 15:41:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.068 15:41:00 -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_dirty 
lvs_grow dirty 00:22:30.068 15:41:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:30.068 15:41:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:30.068 15:41:00 -- common/autotest_common.sh@10 -- # set +x 00:22:30.068 ************************************ 00:22:30.068 START TEST lvs_grow_dirty 00:22:30.068 ************************************ 00:22:30.068 15:41:00 -- common/autotest_common.sh@1111 -- # lvs_grow dirty 00:22:30.068 15:41:00 -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:30.069 15:41:00 -- target/nvmf_lvs_grow.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:30.325 15:41:00 -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:22:30.325 15:41:00 -- target/nvmf_lvs_grow.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:22:30.582 15:41:00 -- target/nvmf_lvs_grow.sh@28 -- # lvs=cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:30.582 15:41:00 -- target/nvmf_lvs_grow.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:30.582 15:41:00 -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:22:30.840 15:41:01 -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:22:30.840 15:41:01 -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:22:30.840 15:41:01 -- target/nvmf_lvs_grow.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create -u cedb6f44-9182-4e97-9f23-40472e278ffc lvol 150 00:22:31.098 15:41:01 -- target/nvmf_lvs_grow.sh@33 -- # lvol=0cff0fbe-465c-4631-a086-679626b717dc 00:22:31.098 15:41:01 -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:31.098 15:41:01 -- target/nvmf_lvs_grow.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:22:31.375 [2024-04-26 15:41:01.555158] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:22:31.375 [2024-04-26 15:41:01.555293] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:22:31.375 true 00:22:31.375 15:41:01 -- target/nvmf_lvs_grow.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:31.375 15:41:01 -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:22:31.634 15:41:01 -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:22:31.634 15:41:01 -- target/nvmf_lvs_grow.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 
-s SPDK0 00:22:31.892 15:41:02 -- target/nvmf_lvs_grow.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0cff0fbe-465c-4631-a086-679626b717dc 00:22:32.160 15:41:02 -- target/nvmf_lvs_grow.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:32.428 15:41:02 -- target/nvmf_lvs_grow.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:32.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:32.686 15:41:02 -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=72879 00:22:32.686 15:41:02 -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:32.686 15:41:02 -- target/nvmf_lvs_grow.sh@47 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:22:32.686 15:41:02 -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 72879 /var/tmp/bdevperf.sock 00:22:32.686 15:41:02 -- common/autotest_common.sh@817 -- # '[' -z 72879 ']' 00:22:32.686 15:41:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:32.686 15:41:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:32.686 15:41:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:32.686 15:41:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:32.686 15:41:02 -- common/autotest_common.sh@10 -- # set +x 00:22:32.944 [2024-04-26 15:41:03.023501] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
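[Editor's note] For reference, the rpc.py sequence traced above, gathered in one place: it exports the lvol bdev over NVMe/TCP and then attaches it from bdevperf over its own RPC socket. Subsystem NQN, lvol UUID, address, and port are the values from this run; only the shell wrapper around them is illustrative.

  RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 0cff0fbe-465c-4631-a086-679626b717dc
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # bdevperf listens on its own RPC socket; the exported namespace is attached to it afterwards:
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0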
00:22:32.944 [2024-04-26 15:41:03.023601] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72879 ] 00:22:32.944 [2024-04-26 15:41:03.163123] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:33.201 [2024-04-26 15:41:03.311518] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:33.767 15:41:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:33.767 15:41:03 -- common/autotest_common.sh@850 -- # return 0 00:22:33.767 15:41:03 -- target/nvmf_lvs_grow.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:22:34.025 Nvme0n1 00:22:34.025 15:41:04 -- target/nvmf_lvs_grow.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:22:34.591 [ 00:22:34.592 { 00:22:34.592 "aliases": [ 00:22:34.592 "0cff0fbe-465c-4631-a086-679626b717dc" 00:22:34.592 ], 00:22:34.592 "assigned_rate_limits": { 00:22:34.592 "r_mbytes_per_sec": 0, 00:22:34.592 "rw_ios_per_sec": 0, 00:22:34.592 "rw_mbytes_per_sec": 0, 00:22:34.592 "w_mbytes_per_sec": 0 00:22:34.592 }, 00:22:34.592 "block_size": 4096, 00:22:34.592 "claimed": false, 00:22:34.592 "driver_specific": { 00:22:34.592 "mp_policy": "active_passive", 00:22:34.592 "nvme": [ 00:22:34.592 { 00:22:34.592 "ctrlr_data": { 00:22:34.592 "ana_reporting": false, 00:22:34.592 "cntlid": 1, 00:22:34.592 "firmware_revision": "24.05", 00:22:34.592 "model_number": "SPDK bdev Controller", 00:22:34.592 "multi_ctrlr": true, 00:22:34.592 "oacs": { 00:22:34.592 "firmware": 0, 00:22:34.592 "format": 0, 00:22:34.592 "ns_manage": 0, 00:22:34.592 "security": 0 00:22:34.592 }, 00:22:34.592 "serial_number": "SPDK0", 00:22:34.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.592 "vendor_id": "0x8086" 00:22:34.592 }, 00:22:34.592 "ns_data": { 00:22:34.592 "can_share": true, 00:22:34.592 "id": 1 00:22:34.592 }, 00:22:34.592 "trid": { 00:22:34.592 "adrfam": "IPv4", 00:22:34.592 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:34.592 "traddr": "10.0.0.2", 00:22:34.592 "trsvcid": "4420", 00:22:34.592 "trtype": "TCP" 00:22:34.592 }, 00:22:34.592 "vs": { 00:22:34.592 "nvme_version": "1.3" 00:22:34.592 } 00:22:34.592 } 00:22:34.592 ] 00:22:34.592 }, 00:22:34.592 "memory_domains": [ 00:22:34.592 { 00:22:34.592 "dma_device_id": "system", 00:22:34.592 "dma_device_type": 1 00:22:34.592 } 00:22:34.592 ], 00:22:34.592 "name": "Nvme0n1", 00:22:34.592 "num_blocks": 38912, 00:22:34.592 "product_name": "NVMe disk", 00:22:34.592 "supported_io_types": { 00:22:34.592 "abort": true, 00:22:34.592 "compare": true, 00:22:34.592 "compare_and_write": true, 00:22:34.592 "flush": true, 00:22:34.592 "nvme_admin": true, 00:22:34.592 "nvme_io": true, 00:22:34.592 "read": true, 00:22:34.592 "reset": true, 00:22:34.592 "unmap": true, 00:22:34.592 "write": true, 00:22:34.592 "write_zeroes": true 00:22:34.592 }, 00:22:34.592 "uuid": "0cff0fbe-465c-4631-a086-679626b717dc", 00:22:34.592 "zoned": false 00:22:34.592 } 00:22:34.592 ] 00:22:34.592 15:41:04 -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=72927 00:22:34.592 15:41:04 -- target/nvmf_lvs_grow.sh@55 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:34.592 15:41:04 -- target/nvmf_lvs_grow.sh@57 
-- # sleep 2 00:22:34.592 Running I/O for 10 seconds... 00:22:35.525 Latency(us) 00:22:35.525 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:35.525 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:35.525 Nvme0n1 : 1.00 7259.00 28.36 0.00 0.00 0.00 0.00 0.00 00:22:35.525 =================================================================================================================== 00:22:35.525 Total : 7259.00 28.36 0.00 0.00 0.00 0.00 0.00 00:22:35.525 00:22:36.461 15:41:06 -- target/nvmf_lvs_grow.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:36.461 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:36.461 Nvme0n1 : 2.00 7095.00 27.71 0.00 0.00 0.00 0.00 0.00 00:22:36.461 =================================================================================================================== 00:22:36.461 Total : 7095.00 27.71 0.00 0.00 0.00 0.00 0.00 00:22:36.461 00:22:36.746 true 00:22:36.746 15:41:06 -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:22:36.746 15:41:06 -- target/nvmf_lvs_grow.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:37.004 15:41:07 -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:22:37.004 15:41:07 -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:22:37.004 15:41:07 -- target/nvmf_lvs_grow.sh@65 -- # wait 72927 00:22:37.571 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:37.571 Nvme0n1 : 3.00 7116.67 27.80 0.00 0.00 0.00 0.00 0.00 00:22:37.571 =================================================================================================================== 00:22:37.571 Total : 7116.67 27.80 0.00 0.00 0.00 0.00 0.00 00:22:37.571 00:22:38.505 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:38.505 Nvme0n1 : 4.00 7009.25 27.38 0.00 0.00 0.00 0.00 0.00 00:22:38.505 =================================================================================================================== 00:22:38.505 Total : 7009.25 27.38 0.00 0.00 0.00 0.00 0.00 00:22:38.505 00:22:39.878 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:39.878 Nvme0n1 : 5.00 7081.40 27.66 0.00 0.00 0.00 0.00 0.00 00:22:39.878 =================================================================================================================== 00:22:39.878 Total : 7081.40 27.66 0.00 0.00 0.00 0.00 0.00 00:22:39.878 00:22:40.444 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:40.444 Nvme0n1 : 6.00 7037.67 27.49 0.00 0.00 0.00 0.00 0.00 00:22:40.444 =================================================================================================================== 00:22:40.444 Total : 7037.67 27.49 0.00 0.00 0.00 0.00 0.00 00:22:40.444 00:22:41.815 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:41.815 Nvme0n1 : 7.00 6790.14 26.52 0.00 0.00 0.00 0.00 0.00 00:22:41.815 =================================================================================================================== 00:22:41.815 Total : 6790.14 26.52 0.00 0.00 0.00 0.00 0.00 00:22:41.815 00:22:42.748 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:42.748 Nvme0n1 : 8.00 6784.12 26.50 0.00 0.00 0.00 0.00 0.00 00:22:42.748 
=================================================================================================================== 00:22:42.748 Total : 6784.12 26.50 0.00 0.00 0.00 0.00 0.00 00:22:42.748 00:22:43.774 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:43.774 Nvme0n1 : 9.00 6803.67 26.58 0.00 0.00 0.00 0.00 0.00 00:22:43.774 =================================================================================================================== 00:22:43.774 Total : 6803.67 26.58 0.00 0.00 0.00 0.00 0.00 00:22:43.774 00:22:44.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:44.707 Nvme0n1 : 10.00 6821.50 26.65 0.00 0.00 0.00 0.00 0.00 00:22:44.707 =================================================================================================================== 00:22:44.707 Total : 6821.50 26.65 0.00 0.00 0.00 0.00 0.00 00:22:44.707 00:22:44.707 00:22:44.707 Latency(us) 00:22:44.707 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.707 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:22:44.707 Nvme0n1 : 10.01 6827.48 26.67 0.00 0.00 18742.97 7804.74 238312.73 00:22:44.707 =================================================================================================================== 00:22:44.707 Total : 6827.48 26.67 0.00 0.00 18742.97 7804.74 238312.73 00:22:44.707 0 00:22:44.707 15:41:14 -- target/nvmf_lvs_grow.sh@66 -- # killprocess 72879 00:22:44.707 15:41:14 -- common/autotest_common.sh@936 -- # '[' -z 72879 ']' 00:22:44.708 15:41:14 -- common/autotest_common.sh@940 -- # kill -0 72879 00:22:44.708 15:41:14 -- common/autotest_common.sh@941 -- # uname 00:22:44.708 15:41:14 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:44.708 15:41:14 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 72879 00:22:44.708 killing process with pid 72879 00:22:44.708 Received shutdown signal, test time was about 10.000000 seconds 00:22:44.708 00:22:44.708 Latency(us) 00:22:44.708 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:44.708 =================================================================================================================== 00:22:44.708 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:44.708 15:41:14 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:22:44.708 15:41:14 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:22:44.708 15:41:14 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 72879' 00:22:44.708 15:41:14 -- common/autotest_common.sh@955 -- # kill 72879 00:22:44.708 15:41:14 -- common/autotest_common.sh@960 -- # wait 72879 00:22:44.965 15:41:15 -- target/nvmf_lvs_grow.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:22:45.223 15:41:15 -- target/nvmf_lvs_grow.sh@69 -- # jq -r '.[0].free_clusters' 00:22:45.223 15:41:15 -- target/nvmf_lvs_grow.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:45.481 15:41:15 -- target/nvmf_lvs_grow.sh@69 -- # free_clusters=61 00:22:45.481 15:41:15 -- target/nvmf_lvs_grow.sh@71 -- # [[ dirty == \d\i\r\t\y ]] 00:22:45.481 15:41:15 -- target/nvmf_lvs_grow.sh@73 -- # kill -9 72284 00:22:45.481 15:41:15 -- target/nvmf_lvs_grow.sh@74 -- # wait 72284 00:22:45.481 /home/vagrant/spdk_repo/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 74: 72284 Killed "${NVMF_APP[@]}" "$@" 00:22:45.481 15:41:15 
-- target/nvmf_lvs_grow.sh@74 -- # true 00:22:45.481 15:41:15 -- target/nvmf_lvs_grow.sh@75 -- # nvmfappstart -m 0x1 00:22:45.481 15:41:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:45.481 15:41:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:45.481 15:41:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.481 15:41:15 -- nvmf/common.sh@470 -- # nvmfpid=73083 00:22:45.481 15:41:15 -- nvmf/common.sh@471 -- # waitforlisten 73083 00:22:45.481 15:41:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:45.481 15:41:15 -- common/autotest_common.sh@817 -- # '[' -z 73083 ']' 00:22:45.481 15:41:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:45.481 15:41:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:45.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:45.481 15:41:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:45.481 15:41:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:45.481 15:41:15 -- common/autotest_common.sh@10 -- # set +x 00:22:45.481 [2024-04-26 15:41:15.715031] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:45.481 [2024-04-26 15:41:15.715828] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:45.739 [2024-04-26 15:41:15.855236] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:45.739 [2024-04-26 15:41:16.003388] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:45.739 [2024-04-26 15:41:16.003473] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:45.739 [2024-04-26 15:41:16.003487] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:45.739 [2024-04-26 15:41:16.003497] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:45.739 [2024-04-26 15:41:16.003506] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
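[Editor's note] This is the point where the "dirty" variant differs from the clean one: the original target is killed with SIGKILL so the lvstore metadata is never flushed, a new nvmf_tgt is started, and re-creating the AIO bdev forces blobstore recovery (the "Performing recovery on blobstore" notices that follow). A condensed sketch, with PIDs and paths taken from this run and the shell wrapper itself illustrative:

  kill -9 72284                                                   # hard-kill the target, leaving the lvstore dirty
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  # re-registering the same backing file triggers blobstore recovery and replays the lvstore metadata
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create \
      /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096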
00:22:45.739 [2024-04-26 15:41:16.003555] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:46.671 15:41:16 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:46.671 15:41:16 -- common/autotest_common.sh@850 -- # return 0 00:22:46.671 15:41:16 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:46.671 15:41:16 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:46.671 15:41:16 -- common/autotest_common.sh@10 -- # set +x 00:22:46.671 15:41:16 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:46.671 15:41:16 -- target/nvmf_lvs_grow.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:46.928 [2024-04-26 15:41:17.040862] blobstore.c:4779:bs_recover: *NOTICE*: Performing recovery on blobstore 00:22:46.928 [2024-04-26 15:41:17.041268] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:22:46.928 [2024-04-26 15:41:17.041515] blobstore.c:4726:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:22:46.928 15:41:17 -- target/nvmf_lvs_grow.sh@76 -- # aio_bdev=aio_bdev 00:22:46.928 15:41:17 -- target/nvmf_lvs_grow.sh@77 -- # waitforbdev 0cff0fbe-465c-4631-a086-679626b717dc 00:22:46.928 15:41:17 -- common/autotest_common.sh@885 -- # local bdev_name=0cff0fbe-465c-4631-a086-679626b717dc 00:22:46.928 15:41:17 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:46.928 15:41:17 -- common/autotest_common.sh@887 -- # local i 00:22:46.928 15:41:17 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:46.928 15:41:17 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:46.928 15:41:17 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:47.185 15:41:17 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0cff0fbe-465c-4631-a086-679626b717dc -t 2000 00:22:47.443 [ 00:22:47.443 { 00:22:47.443 "aliases": [ 00:22:47.443 "lvs/lvol" 00:22:47.443 ], 00:22:47.443 "assigned_rate_limits": { 00:22:47.443 "r_mbytes_per_sec": 0, 00:22:47.443 "rw_ios_per_sec": 0, 00:22:47.443 "rw_mbytes_per_sec": 0, 00:22:47.443 "w_mbytes_per_sec": 0 00:22:47.443 }, 00:22:47.443 "block_size": 4096, 00:22:47.443 "claimed": false, 00:22:47.443 "driver_specific": { 00:22:47.443 "lvol": { 00:22:47.443 "base_bdev": "aio_bdev", 00:22:47.443 "clone": false, 00:22:47.443 "esnap_clone": false, 00:22:47.443 "lvol_store_uuid": "cedb6f44-9182-4e97-9f23-40472e278ffc", 00:22:47.443 "snapshot": false, 00:22:47.443 "thin_provision": false 00:22:47.443 } 00:22:47.443 }, 00:22:47.443 "name": "0cff0fbe-465c-4631-a086-679626b717dc", 00:22:47.443 "num_blocks": 38912, 00:22:47.443 "product_name": "Logical Volume", 00:22:47.443 "supported_io_types": { 00:22:47.443 "abort": false, 00:22:47.443 "compare": false, 00:22:47.443 "compare_and_write": false, 00:22:47.443 "flush": false, 00:22:47.443 "nvme_admin": false, 00:22:47.443 "nvme_io": false, 00:22:47.443 "read": true, 00:22:47.443 "reset": true, 00:22:47.443 "unmap": true, 00:22:47.443 "write": true, 00:22:47.443 "write_zeroes": true 00:22:47.443 }, 00:22:47.443 "uuid": "0cff0fbe-465c-4631-a086-679626b717dc", 00:22:47.443 "zoned": false 00:22:47.443 } 00:22:47.443 ] 00:22:47.443 15:41:17 -- common/autotest_common.sh@893 -- # return 0 00:22:47.443 15:41:17 -- target/nvmf_lvs_grow.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:47.443 15:41:17 -- target/nvmf_lvs_grow.sh@78 -- # jq -r '.[0].free_clusters' 00:22:47.700 15:41:17 -- target/nvmf_lvs_grow.sh@78 -- # (( free_clusters == 61 )) 00:22:47.700 15:41:17 -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].total_data_clusters' 00:22:47.700 15:41:17 -- target/nvmf_lvs_grow.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:48.263 15:41:18 -- target/nvmf_lvs_grow.sh@79 -- # (( data_clusters == 99 )) 00:22:48.263 15:41:18 -- target/nvmf_lvs_grow.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:48.263 [2024-04-26 15:41:18.493741] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:22:48.263 15:41:18 -- target/nvmf_lvs_grow.sh@84 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:48.263 15:41:18 -- common/autotest_common.sh@638 -- # local es=0 00:22:48.263 15:41:18 -- common/autotest_common.sh@640 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:48.263 15:41:18 -- common/autotest_common.sh@626 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.520 15:41:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.520 15:41:18 -- common/autotest_common.sh@630 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.520 15:41:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.520 15:41:18 -- common/autotest_common.sh@632 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.520 15:41:18 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:22:48.520 15:41:18 -- common/autotest_common.sh@632 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:22:48.520 15:41:18 -- common/autotest_common.sh@632 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:22:48.520 15:41:18 -- common/autotest_common.sh@641 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:48.521 2024/04/26 15:41:18 error on JSON-RPC call, method: bdev_lvol_get_lvstores, params: map[uuid:cedb6f44-9182-4e97-9f23-40472e278ffc], err: error received for bdev_lvol_get_lvstores method, err: Code=-19 Msg=No such device 00:22:48.521 request: 00:22:48.521 { 00:22:48.521 "method": "bdev_lvol_get_lvstores", 00:22:48.521 "params": { 00:22:48.521 "uuid": "cedb6f44-9182-4e97-9f23-40472e278ffc" 00:22:48.521 } 00:22:48.521 } 00:22:48.521 Got JSON-RPC error response 00:22:48.521 GoRPCClient: error on JSON-RPC call 00:22:48.521 15:41:18 -- common/autotest_common.sh@641 -- # es=1 00:22:48.521 15:41:18 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:22:48.521 15:41:18 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:22:48.521 15:41:18 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:22:48.521 15:41:18 -- target/nvmf_lvs_grow.sh@85 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_create /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:22:49.086 aio_bdev 00:22:49.086 15:41:19 -- target/nvmf_lvs_grow.sh@86 -- # waitforbdev 0cff0fbe-465c-4631-a086-679626b717dc 00:22:49.086 15:41:19 -- common/autotest_common.sh@885 -- # local bdev_name=0cff0fbe-465c-4631-a086-679626b717dc 00:22:49.086 15:41:19 -- common/autotest_common.sh@886 -- # local bdev_timeout= 00:22:49.086 
15:41:19 -- common/autotest_common.sh@887 -- # local i 00:22:49.086 15:41:19 -- common/autotest_common.sh@888 -- # [[ -z '' ]] 00:22:49.086 15:41:19 -- common/autotest_common.sh@888 -- # bdev_timeout=2000 00:22:49.086 15:41:19 -- common/autotest_common.sh@890 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:22:49.342 15:41:19 -- common/autotest_common.sh@892 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 0cff0fbe-465c-4631-a086-679626b717dc -t 2000 00:22:49.342 [ 00:22:49.342 { 00:22:49.342 "aliases": [ 00:22:49.342 "lvs/lvol" 00:22:49.342 ], 00:22:49.342 "assigned_rate_limits": { 00:22:49.342 "r_mbytes_per_sec": 0, 00:22:49.343 "rw_ios_per_sec": 0, 00:22:49.343 "rw_mbytes_per_sec": 0, 00:22:49.343 "w_mbytes_per_sec": 0 00:22:49.343 }, 00:22:49.343 "block_size": 4096, 00:22:49.343 "claimed": false, 00:22:49.343 "driver_specific": { 00:22:49.343 "lvol": { 00:22:49.343 "base_bdev": "aio_bdev", 00:22:49.343 "clone": false, 00:22:49.343 "esnap_clone": false, 00:22:49.343 "lvol_store_uuid": "cedb6f44-9182-4e97-9f23-40472e278ffc", 00:22:49.343 "snapshot": false, 00:22:49.343 "thin_provision": false 00:22:49.343 } 00:22:49.343 }, 00:22:49.343 "name": "0cff0fbe-465c-4631-a086-679626b717dc", 00:22:49.343 "num_blocks": 38912, 00:22:49.343 "product_name": "Logical Volume", 00:22:49.343 "supported_io_types": { 00:22:49.343 "abort": false, 00:22:49.343 "compare": false, 00:22:49.343 "compare_and_write": false, 00:22:49.343 "flush": false, 00:22:49.343 "nvme_admin": false, 00:22:49.343 "nvme_io": false, 00:22:49.343 "read": true, 00:22:49.343 "reset": true, 00:22:49.343 "unmap": true, 00:22:49.343 "write": true, 00:22:49.343 "write_zeroes": true 00:22:49.343 }, 00:22:49.343 "uuid": "0cff0fbe-465c-4631-a086-679626b717dc", 00:22:49.343 "zoned": false 00:22:49.343 } 00:22:49.343 ] 00:22:49.600 15:41:19 -- common/autotest_common.sh@893 -- # return 0 00:22:49.600 15:41:19 -- target/nvmf_lvs_grow.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:49.600 15:41:19 -- target/nvmf_lvs_grow.sh@87 -- # jq -r '.[0].free_clusters' 00:22:49.880 15:41:19 -- target/nvmf_lvs_grow.sh@87 -- # (( free_clusters == 61 )) 00:22:49.880 15:41:19 -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].total_data_clusters' 00:22:49.880 15:41:19 -- target/nvmf_lvs_grow.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:50.138 15:41:20 -- target/nvmf_lvs_grow.sh@88 -- # (( data_clusters == 99 )) 00:22:50.138 15:41:20 -- target/nvmf_lvs_grow.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete 0cff0fbe-465c-4631-a086-679626b717dc 00:22:50.395 15:41:20 -- target/nvmf_lvs_grow.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cedb6f44-9182-4e97-9f23-40472e278ffc 00:22:50.653 15:41:20 -- target/nvmf_lvs_grow.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:22:50.910 15:41:21 -- target/nvmf_lvs_grow.sh@94 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/target/aio_bdev 00:22:51.168 ************************************ 00:22:51.168 END TEST lvs_grow_dirty 00:22:51.168 ************************************ 00:22:51.168 00:22:51.168 real 0m21.185s 00:22:51.168 user 0m42.981s 00:22:51.168 sys 0m8.316s 00:22:51.168 15:41:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.168 15:41:21 -- common/autotest_common.sh@10 -- # set +x 00:22:51.425 15:41:21 -- 
target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:22:51.425 15:41:21 -- common/autotest_common.sh@794 -- # type=--id 00:22:51.425 15:41:21 -- common/autotest_common.sh@795 -- # id=0 00:22:51.425 15:41:21 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:22:51.425 15:41:21 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:51.425 15:41:21 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:22:51.425 15:41:21 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:22:51.425 15:41:21 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:22:51.425 15:41:21 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:51.425 nvmf_trace.0 00:22:51.425 15:41:21 -- common/autotest_common.sh@809 -- # return 0 00:22:51.425 15:41:21 -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:22:51.425 15:41:21 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:51.425 15:41:21 -- nvmf/common.sh@117 -- # sync 00:22:51.684 15:41:21 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.684 15:41:21 -- nvmf/common.sh@120 -- # set +e 00:22:51.684 15:41:21 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.684 15:41:21 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.684 rmmod nvme_tcp 00:22:51.684 rmmod nvme_fabrics 00:22:51.684 rmmod nvme_keyring 00:22:51.684 15:41:21 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.684 15:41:21 -- nvmf/common.sh@124 -- # set -e 00:22:51.684 15:41:21 -- nvmf/common.sh@125 -- # return 0 00:22:51.684 15:41:21 -- nvmf/common.sh@478 -- # '[' -n 73083 ']' 00:22:51.684 15:41:21 -- nvmf/common.sh@479 -- # killprocess 73083 00:22:51.684 15:41:21 -- common/autotest_common.sh@936 -- # '[' -z 73083 ']' 00:22:51.684 15:41:21 -- common/autotest_common.sh@940 -- # kill -0 73083 00:22:51.684 15:41:21 -- common/autotest_common.sh@941 -- # uname 00:22:51.684 15:41:21 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:51.684 15:41:21 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73083 00:22:51.684 killing process with pid 73083 00:22:51.684 15:41:21 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:51.684 15:41:21 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:51.684 15:41:21 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73083' 00:22:51.684 15:41:21 -- common/autotest_common.sh@955 -- # kill 73083 00:22:51.684 15:41:21 -- common/autotest_common.sh@960 -- # wait 73083 00:22:51.942 15:41:22 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:51.942 15:41:22 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:51.942 15:41:22 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:51.942 15:41:22 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.942 15:41:22 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:51.942 15:41:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.942 15:41:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:51.942 15:41:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:51.943 15:41:22 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:51.943 ************************************ 00:22:51.943 END TEST nvmf_lvs_grow 00:22:51.943 ************************************ 00:22:51.943 00:22:51.943 real 0m41.839s 00:22:51.943 user 1m7.024s 00:22:51.943 sys 0m11.360s 00:22:51.943 15:41:22 -- 
common/autotest_common.sh@1112 -- # xtrace_disable 00:22:51.943 15:41:22 -- common/autotest_common.sh@10 -- # set +x 00:22:51.943 15:41:22 -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:51.943 15:41:22 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:51.943 15:41:22 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:51.943 15:41:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.201 ************************************ 00:22:52.201 START TEST nvmf_bdev_io_wait 00:22:52.201 ************************************ 00:22:52.201 15:41:22 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:22:52.201 * Looking for test storage... 00:22:52.201 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:52.201 15:41:22 -- target/bdev_io_wait.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:52.201 15:41:22 -- nvmf/common.sh@7 -- # uname -s 00:22:52.201 15:41:22 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:52.201 15:41:22 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:52.201 15:41:22 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:52.201 15:41:22 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:52.201 15:41:22 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:52.201 15:41:22 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:52.201 15:41:22 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:52.201 15:41:22 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:52.201 15:41:22 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:52.201 15:41:22 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:52.201 15:41:22 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:22:52.201 15:41:22 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:22:52.201 15:41:22 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:52.201 15:41:22 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:52.201 15:41:22 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:52.201 15:41:22 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:52.201 15:41:22 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:52.201 15:41:22 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:52.201 15:41:22 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:52.201 15:41:22 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:52.201 15:41:22 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.201 15:41:22 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.201 15:41:22 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.201 15:41:22 -- paths/export.sh@5 -- # export PATH 00:22:52.201 15:41:22 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:52.201 15:41:22 -- nvmf/common.sh@47 -- # : 0 00:22:52.201 15:41:22 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:52.201 15:41:22 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:52.201 15:41:22 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:52.201 15:41:22 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:52.201 15:41:22 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:52.201 15:41:22 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:52.201 15:41:22 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:52.201 15:41:22 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:52.201 15:41:22 -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:52.201 15:41:22 -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:52.201 15:41:22 -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:22:52.201 15:41:22 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:52.201 15:41:22 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:52.201 15:41:22 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:52.201 15:41:22 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:52.201 15:41:22 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:52.201 15:41:22 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.201 15:41:22 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:52.201 15:41:22 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:52.201 15:41:22 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:52.201 15:41:22 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:52.201 15:41:22 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:52.201 15:41:22 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:52.201 15:41:22 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 
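For orientation, the nvmf_veth_init call traced next builds the small virtual network that the rest of this run talks over. Summarized from the ip commands that follow (interface, bridge and namespace names exactly as they appear in the trace; nothing here beyond the layout is added):

    nvmf_init_if  (host,  10.0.0.1/24)                <-veth->  nvmf_init_br (host)  -> master nvmf_br
    nvmf_tgt_if   (netns nvmf_tgt_ns_spdk, 10.0.0.2)  <-veth->  nvmf_tgt_br  (host)  -> master nvmf_br
    nvmf_tgt_if2  (netns nvmf_tgt_ns_spdk, 10.0.0.3)  <-veth->  nvmf_tgt_br2 (host)  -> master nvmf_br

An iptables ACCEPT rule then opens TCP port 4420 on nvmf_init_if, and the three pings confirm reachability before the target is started.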
00:22:52.201 15:41:22 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:52.201 15:41:22 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:52.201 15:41:22 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:52.201 15:41:22 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:52.201 15:41:22 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:52.201 15:41:22 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:52.201 15:41:22 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:52.201 15:41:22 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:52.201 15:41:22 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:52.201 15:41:22 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:52.201 15:41:22 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:52.201 15:41:22 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:52.201 15:41:22 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:52.201 15:41:22 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:52.201 15:41:22 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:52.201 Cannot find device "nvmf_tgt_br" 00:22:52.201 15:41:22 -- nvmf/common.sh@155 -- # true 00:22:52.201 15:41:22 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:52.201 Cannot find device "nvmf_tgt_br2" 00:22:52.201 15:41:22 -- nvmf/common.sh@156 -- # true 00:22:52.201 15:41:22 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:52.201 15:41:22 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:52.201 Cannot find device "nvmf_tgt_br" 00:22:52.201 15:41:22 -- nvmf/common.sh@158 -- # true 00:22:52.201 15:41:22 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:52.459 Cannot find device "nvmf_tgt_br2" 00:22:52.459 15:41:22 -- nvmf/common.sh@159 -- # true 00:22:52.459 15:41:22 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:52.459 15:41:22 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:52.459 15:41:22 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:52.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.459 15:41:22 -- nvmf/common.sh@162 -- # true 00:22:52.459 15:41:22 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:52.459 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:52.459 15:41:22 -- nvmf/common.sh@163 -- # true 00:22:52.459 15:41:22 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:52.459 15:41:22 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:52.459 15:41:22 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:52.459 15:41:22 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:52.459 15:41:22 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:52.459 15:41:22 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:52.459 15:41:22 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:52.459 15:41:22 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:52.459 15:41:22 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:52.459 
15:41:22 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:52.459 15:41:22 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:52.459 15:41:22 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:52.459 15:41:22 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:52.459 15:41:22 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:52.459 15:41:22 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:22:52.459 15:41:22 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:52.459 15:41:22 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:52.459 15:41:22 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:52.459 15:41:22 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:52.459 15:41:22 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:52.459 15:41:22 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:52.717 15:41:22 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:52.717 15:41:22 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:52.717 15:41:22 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:52.717 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:52.717 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.106 ms 00:22:52.717 00:22:52.717 --- 10.0.0.2 ping statistics --- 00:22:52.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.717 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:22:52.717 15:41:22 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:52.717 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:52.717 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.070 ms 00:22:52.717 00:22:52.717 --- 10.0.0.3 ping statistics --- 00:22:52.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.717 rtt min/avg/max/mdev = 0.070/0.070/0.070/0.000 ms 00:22:52.717 15:41:22 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:52.717 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:52.717 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.033 ms 00:22:52.717 00:22:52.717 --- 10.0.0.1 ping statistics --- 00:22:52.717 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:52.717 rtt min/avg/max/mdev = 0.033/0.033/0.033/0.000 ms 00:22:52.717 15:41:22 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:52.717 15:41:22 -- nvmf/common.sh@422 -- # return 0 00:22:52.717 15:41:22 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:52.717 15:41:22 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:52.717 15:41:22 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:52.717 15:41:22 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:52.717 15:41:22 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:52.717 15:41:22 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:52.717 15:41:22 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:52.717 15:41:22 -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:22:52.717 15:41:22 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:52.717 15:41:22 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:52.717 15:41:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.717 15:41:22 -- nvmf/common.sh@470 -- # nvmfpid=73494 00:22:52.717 15:41:22 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:22:52.717 15:41:22 -- nvmf/common.sh@471 -- # waitforlisten 73494 00:22:52.717 15:41:22 -- common/autotest_common.sh@817 -- # '[' -z 73494 ']' 00:22:52.717 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:52.717 15:41:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:52.717 15:41:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:52.717 15:41:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:52.717 15:41:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:52.717 15:41:22 -- common/autotest_common.sh@10 -- # set +x 00:22:52.717 [2024-04-26 15:41:22.871239] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:52.717 [2024-04-26 15:41:22.871366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:52.976 [2024-04-26 15:41:23.015084] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:52.976 [2024-04-26 15:41:23.183646] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:52.976 [2024-04-26 15:41:23.184030] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:52.976 [2024-04-26 15:41:23.184451] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:52.976 [2024-04-26 15:41:23.184663] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:52.976 [2024-04-26 15:41:23.184846] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
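The target itself runs inside that namespace: nvmfappstart above amounts to launching nvmf_tgt with --wait-for-rpc and then polling until its RPC socket answers. A rough standalone equivalent, with the binary path, core mask and socket path taken from the trace (the polling loop is only an illustration; the harness uses its own waitforlisten helper):

    ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc &
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do sleep 0.5; done

The rpc_cmd calls traced below (bdev_set_options, framework_start_init, nvmf_create_transport) then finish bringing the target up.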
00:22:52.976 [2024-04-26 15:41:23.185059] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:52.976 [2024-04-26 15:41:23.185263] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:22:52.976 [2024-04-26 15:41:23.185268] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:52.976 [2024-04-26 15:41:23.185204] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:22:53.909 15:41:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:53.909 15:41:23 -- common/autotest_common.sh@850 -- # return 0 00:22:53.909 15:41:23 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:53.909 15:41:23 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:53.909 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 15:41:23 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:53.909 15:41:23 -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:22:53.909 15:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.909 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 15:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.909 15:41:23 -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:22:53.909 15:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.909 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 15:41:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.909 15:41:23 -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:53.909 15:41:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.909 15:41:23 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 [2024-04-26 15:41:24.003195] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.909 15:41:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.909 15:41:24 -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:53.909 15:41:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.909 15:41:24 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 Malloc0 00:22:53.909 15:41:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.909 15:41:24 -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:53.909 15:41:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.909 15:41:24 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 15:41:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.909 15:41:24 -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:53.909 15:41:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.909 15:41:24 -- common/autotest_common.sh@10 -- # set +x 00:22:53.909 15:41:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.909 15:41:24 -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:53.909 15:41:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:53.910 15:41:24 -- common/autotest_common.sh@10 -- # set +x 00:22:53.910 [2024-04-26 15:41:24.075350] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.910 15:41:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@28 -- # WRITE_PID=73555 00:22:53.910 15:41:24 
-- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # config=() 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # local subsystem config 00:22:53.910 15:41:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:53.910 { 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme$subsystem", 00:22:53.910 "trtype": "$TEST_TRANSPORT", 00:22:53.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "$NVMF_PORT", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.910 "hdgst": ${hdgst:-false}, 00:22:53.910 "ddgst": ${ddgst:-false} 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 } 00:22:53.910 EOF 00:22:53.910 )") 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@30 -- # READ_PID=73557 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # config=() 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # local subsystem config 00:22:53.910 15:41:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:53.910 { 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme$subsystem", 00:22:53.910 "trtype": "$TEST_TRANSPORT", 00:22:53.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "$NVMF_PORT", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.910 "hdgst": ${hdgst:-false}, 00:22:53.910 "ddgst": ${ddgst:-false} 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 } 00:22:53.910 EOF 00:22:53.910 )") 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # cat 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=73560 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # cat 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=73564 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # config=() 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # local subsystem config 00:22:53.910 15:41:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:53.910 { 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme$subsystem", 00:22:53.910 "trtype": "$TEST_TRANSPORT", 00:22:53.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "$NVMF_PORT", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.910 "hdgst": ${hdgst:-false}, 00:22:53.910 "ddgst": ${ddgst:-false} 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 } 00:22:53.910 EOF 
00:22:53.910 )") 00:22:53.910 15:41:24 -- nvmf/common.sh@545 -- # jq . 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # cat 00:22:53.910 15:41:24 -- nvmf/common.sh@545 -- # jq . 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@35 -- # sync 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # config=() 00:22:53.910 15:41:24 -- nvmf/common.sh@521 -- # local subsystem config 00:22:53.910 15:41:24 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:22:53.910 { 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme$subsystem", 00:22:53.910 "trtype": "$TEST_TRANSPORT", 00:22:53.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "$NVMF_PORT", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:53.910 "hdgst": ${hdgst:-false}, 00:22:53.910 "ddgst": ${ddgst:-false} 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 } 00:22:53.910 EOF 00:22:53.910 )") 00:22:53.910 15:41:24 -- nvmf/common.sh@546 -- # IFS=, 00:22:53.910 15:41:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme1", 00:22:53.910 "trtype": "tcp", 00:22:53.910 "traddr": "10.0.0.2", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "4420", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.910 "hdgst": false, 00:22:53.910 "ddgst": false 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 }' 00:22:53.910 15:41:24 -- nvmf/common.sh@543 -- # cat 00:22:53.910 15:41:24 -- nvmf/common.sh@546 -- # IFS=, 00:22:53.910 15:41:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme1", 00:22:53.910 "trtype": "tcp", 00:22:53.910 "traddr": "10.0.0.2", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "4420", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.910 "hdgst": false, 00:22:53.910 "ddgst": false 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 }' 00:22:53.910 15:41:24 -- nvmf/common.sh@545 -- # jq . 00:22:53.910 15:41:24 -- nvmf/common.sh@545 -- # jq . 
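Each bdevperf instance receives its bdev configuration up front rather than over RPC: gen_nvmf_target_json prints a JSON document whose only entry is the bdev_nvme_attach_controller call visible in the resolved output here, and the harness hands it to bdevperf as --json /dev/fd/63, i.e. through process substitution. Written out by hand that is roughly:

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x10 -i 1 -q 128 -o 4096 -w write -t 1 -s 256 \
        --json <(gen_nvmf_target_json)

so the Nvme1 controller at 10.0.0.2:4420 (subsystem nqn.2016-06.io.spdk:cnode1) exists as soon as the app finishes loading its configuration.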
00:22:53.910 15:41:24 -- nvmf/common.sh@546 -- # IFS=, 00:22:53.910 15:41:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme1", 00:22:53.910 "trtype": "tcp", 00:22:53.910 "traddr": "10.0.0.2", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "4420", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.910 "hdgst": false, 00:22:53.910 "ddgst": false 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 }' 00:22:53.910 15:41:24 -- nvmf/common.sh@546 -- # IFS=, 00:22:53.910 15:41:24 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:22:53.910 "params": { 00:22:53.910 "name": "Nvme1", 00:22:53.910 "trtype": "tcp", 00:22:53.910 "traddr": "10.0.0.2", 00:22:53.910 "adrfam": "ipv4", 00:22:53.910 "trsvcid": "4420", 00:22:53.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:53.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:53.910 "hdgst": false, 00:22:53.910 "ddgst": false 00:22:53.910 }, 00:22:53.910 "method": "bdev_nvme_attach_controller" 00:22:53.910 }' 00:22:53.910 [2024-04-26 15:41:24.140764] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:53.910 [2024-04-26 15:41:24.141109] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:22:53.910 15:41:24 -- target/bdev_io_wait.sh@37 -- # wait 73555 00:22:53.910 [2024-04-26 15:41:24.161889] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:53.910 [2024-04-26 15:41:24.162027] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:22:53.910 [2024-04-26 15:41:24.181751] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:53.910 [2024-04-26 15:41:24.182205] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:22:53.910 [2024-04-26 15:41:24.186493] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:53.910 [2024-04-26 15:41:24.187479] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:22:54.169 [2024-04-26 15:41:24.355608] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.169 [2024-04-26 15:41:24.418601] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.169 [2024-04-26 15:41:24.453663] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 7 00:22:54.428 [2024-04-26 15:41:24.499592] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.428 [2024-04-26 15:41:24.520439] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:22:54.428 Running I/O for 1 seconds... 
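The four instances run concurrently against the same cnode1 listener, one per workload, which is why their startup messages interleave above. Each gets its own core mask and its own -i instance id, presumably so their DPDK shared-memory files stay separate; the script records the background pids and waits on them in turn, roughly:

    WRITE_PID=73555   # -m 0x10 -i 1 -w write
    READ_PID=73557    # -m 0x20 -i 2 -w read
    FLUSH_PID=73560   # -m 0x40 -i 3 -w flush
    UNMAP_PID=73564   # -m 0x80 -i 4 -w unmap   (pids are the ones from this run)
    wait "$WRITE_PID"; wait "$READ_PID"; wait "$FLUSH_PID"; wait "$UNMAP_PID"

The per-workload latency tables that follow report each instance separately.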
00:22:54.428 [2024-04-26 15:41:24.610088] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:22:54.428 [2024-04-26 15:41:24.613421] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:54.428 Running I/O for 1 seconds... 00:22:54.685 [2024-04-26 15:41:24.736016] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:22:54.685 Running I/O for 1 seconds... 00:22:54.685 Running I/O for 1 seconds... 00:22:55.617 00:22:55.617 Latency(us) 00:22:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.617 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:22:55.617 Nvme1n1 : 1.03 5464.26 21.34 0.00 0.00 23175.59 5719.51 50283.99 00:22:55.617 =================================================================================================================== 00:22:55.617 Total : 5464.26 21.34 0.00 0.00 23175.59 5719.51 50283.99 00:22:55.617 00:22:55.617 Latency(us) 00:22:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.617 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:22:55.617 Nvme1n1 : 1.00 199638.12 779.84 0.00 0.00 638.70 262.52 990.49 00:22:55.617 =================================================================================================================== 00:22:55.617 Total : 199638.12 779.84 0.00 0.00 638.70 262.52 990.49 00:22:55.617 00:22:55.617 Latency(us) 00:22:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.617 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:22:55.617 Nvme1n1 : 1.02 5592.53 21.85 0.00 0.00 22691.05 12690.15 34317.03 00:22:55.617 =================================================================================================================== 00:22:55.617 Total : 5592.53 21.85 0.00 0.00 22691.05 12690.15 34317.03 00:22:55.617 00:22:55.617 Latency(us) 00:22:55.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:55.617 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:22:55.617 Nvme1n1 : 1.01 5320.40 20.78 0.00 0.00 23967.00 6702.55 58624.93 00:22:55.617 =================================================================================================================== 00:22:55.617 Total : 5320.40 20.78 0.00 0.00 23967.00 6702.55 58624.93 00:22:56.184 15:41:26 -- target/bdev_io_wait.sh@38 -- # wait 73557 00:22:56.184 15:41:26 -- target/bdev_io_wait.sh@39 -- # wait 73560 00:22:56.184 15:41:26 -- target/bdev_io_wait.sh@40 -- # wait 73564 00:22:56.184 15:41:26 -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:56.184 15:41:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:56.184 15:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:56.184 15:41:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:56.184 15:41:26 -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:22:56.184 15:41:26 -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:22:56.184 15:41:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:22:56.184 15:41:26 -- nvmf/common.sh@117 -- # sync 00:22:56.184 15:41:26 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:56.184 15:41:26 -- nvmf/common.sh@120 -- # set +e 00:22:56.184 15:41:26 -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:56.184 15:41:26 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:56.184 rmmod nvme_tcp 00:22:56.184 rmmod nvme_fabrics 00:22:56.184 rmmod nvme_keyring 00:22:56.184 
15:41:26 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:56.184 15:41:26 -- nvmf/common.sh@124 -- # set -e 00:22:56.184 15:41:26 -- nvmf/common.sh@125 -- # return 0 00:22:56.184 15:41:26 -- nvmf/common.sh@478 -- # '[' -n 73494 ']' 00:22:56.184 15:41:26 -- nvmf/common.sh@479 -- # killprocess 73494 00:22:56.184 15:41:26 -- common/autotest_common.sh@936 -- # '[' -z 73494 ']' 00:22:56.184 15:41:26 -- common/autotest_common.sh@940 -- # kill -0 73494 00:22:56.184 15:41:26 -- common/autotest_common.sh@941 -- # uname 00:22:56.184 15:41:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:22:56.184 15:41:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73494 00:22:56.184 killing process with pid 73494 00:22:56.184 15:41:26 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:22:56.184 15:41:26 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:22:56.184 15:41:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73494' 00:22:56.184 15:41:26 -- common/autotest_common.sh@955 -- # kill 73494 00:22:56.184 15:41:26 -- common/autotest_common.sh@960 -- # wait 73494 00:22:56.443 15:41:26 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:22:56.443 15:41:26 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:22:56.443 15:41:26 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:22:56.443 15:41:26 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:56.443 15:41:26 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:56.443 15:41:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.443 15:41:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.443 15:41:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.700 15:41:26 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:22:56.700 00:22:56.700 real 0m4.452s 00:22:56.700 user 0m19.418s 00:22:56.700 sys 0m2.008s 00:22:56.700 ************************************ 00:22:56.700 END TEST nvmf_bdev_io_wait 00:22:56.700 ************************************ 00:22:56.700 15:41:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:22:56.700 15:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:56.700 15:41:26 -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:56.700 15:41:26 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:22:56.700 15:41:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:22:56.700 15:41:26 -- common/autotest_common.sh@10 -- # set +x 00:22:56.700 ************************************ 00:22:56.700 START TEST nvmf_queue_depth 00:22:56.700 ************************************ 00:22:56.700 15:41:26 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:22:56.700 * Looking for test storage... 
00:22:56.700 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:22:56.700 15:41:26 -- target/queue_depth.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:22:56.700 15:41:26 -- nvmf/common.sh@7 -- # uname -s 00:22:56.700 15:41:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:56.700 15:41:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:56.700 15:41:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:56.700 15:41:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:56.700 15:41:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:56.700 15:41:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:56.700 15:41:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:56.700 15:41:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:56.700 15:41:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:56.700 15:41:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:56.700 15:41:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:22:56.701 15:41:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:22:56.701 15:41:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:56.701 15:41:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:56.701 15:41:26 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:22:56.701 15:41:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:56.701 15:41:26 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:22:56.701 15:41:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:56.701 15:41:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:56.701 15:41:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:56.701 15:41:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.701 15:41:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.701 15:41:26 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.701 15:41:26 -- paths/export.sh@5 -- # export PATH 00:22:56.701 15:41:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:56.701 15:41:26 -- nvmf/common.sh@47 -- # : 0 00:22:56.701 15:41:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:56.701 15:41:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:56.701 15:41:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:56.701 15:41:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:56.701 15:41:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:56.701 15:41:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:56.701 15:41:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:56.701 15:41:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:56.959 15:41:26 -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:22:56.959 15:41:26 -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:22:56.959 15:41:26 -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:56.959 15:41:26 -- target/queue_depth.sh@19 -- # nvmftestinit 00:22:56.959 15:41:26 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:22:56.959 15:41:26 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:56.959 15:41:26 -- nvmf/common.sh@437 -- # prepare_net_devs 00:22:56.959 15:41:26 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:22:56.959 15:41:26 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:22:56.959 15:41:26 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:56.959 15:41:26 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:56.959 15:41:26 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:56.959 15:41:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:22:56.959 15:41:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:22:56.959 15:41:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:22:56.959 15:41:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:22:56.959 15:41:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:22:56.959 15:41:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:22:56.959 15:41:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.959 15:41:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.959 15:41:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:22:56.959 15:41:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:22:56.959 15:41:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:22:56.959 15:41:27 -- 
nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:22:56.959 15:41:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:22:56.959 15:41:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.959 15:41:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:22:56.959 15:41:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:22:56.959 15:41:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:22:56.959 15:41:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:22:56.959 15:41:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:22:56.959 15:41:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:22:56.959 Cannot find device "nvmf_tgt_br" 00:22:56.959 15:41:27 -- nvmf/common.sh@155 -- # true 00:22:56.959 15:41:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:22:56.959 Cannot find device "nvmf_tgt_br2" 00:22:56.959 15:41:27 -- nvmf/common.sh@156 -- # true 00:22:56.959 15:41:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:22:56.959 15:41:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:22:56.959 Cannot find device "nvmf_tgt_br" 00:22:56.959 15:41:27 -- nvmf/common.sh@158 -- # true 00:22:56.959 15:41:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:22:56.959 Cannot find device "nvmf_tgt_br2" 00:22:56.959 15:41:27 -- nvmf/common.sh@159 -- # true 00:22:56.959 15:41:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:22:56.959 15:41:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:22:56.959 15:41:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:22:56.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:56.959 15:41:27 -- nvmf/common.sh@162 -- # true 00:22:56.959 15:41:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:22:56.959 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:22:56.959 15:41:27 -- nvmf/common.sh@163 -- # true 00:22:56.959 15:41:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:22:56.959 15:41:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:22:56.959 15:41:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:22:56.959 15:41:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:22:56.959 15:41:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:22:56.959 15:41:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:22:56.959 15:41:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:22:56.959 15:41:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:22:56.959 15:41:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:22:56.959 15:41:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:22:56.959 15:41:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:22:56.959 15:41:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:22:56.959 15:41:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:22:56.959 15:41:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:22:57.253 15:41:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set nvmf_tgt_if2 up 00:22:57.253 15:41:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:22:57.253 15:41:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:22:57.253 15:41:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:22:57.253 15:41:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:22:57.253 15:41:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:22:57.253 15:41:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:22:57.253 15:41:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:22:57.253 15:41:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:22:57.253 15:41:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:22:57.253 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:57.253 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:22:57.253 00:22:57.253 --- 10.0.0.2 ping statistics --- 00:22:57.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.253 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:22:57.253 15:41:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:22:57.253 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:22:57.253 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.095 ms 00:22:57.253 00:22:57.253 --- 10.0.0.3 ping statistics --- 00:22:57.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.253 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:22:57.253 15:41:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:22:57.253 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:57.253 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:22:57.253 00:22:57.253 --- 10.0.0.1 ping statistics --- 00:22:57.253 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:57.253 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:22:57.253 15:41:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:57.253 15:41:27 -- nvmf/common.sh@422 -- # return 0 00:22:57.253 15:41:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:22:57.253 15:41:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:57.253 15:41:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:22:57.253 15:41:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:22:57.253 15:41:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:57.253 15:41:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:22:57.253 15:41:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:22:57.253 15:41:27 -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:22:57.253 15:41:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:22:57.253 15:41:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:22:57.253 15:41:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.253 15:41:27 -- nvmf/common.sh@470 -- # nvmfpid=73798 00:22:57.253 15:41:27 -- nvmf/common.sh@471 -- # waitforlisten 73798 00:22:57.253 15:41:27 -- common/autotest_common.sh@817 -- # '[' -z 73798 ']' 00:22:57.253 15:41:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:57.253 15:41:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:57.253 15:41:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:57.253 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:22:57.253 15:41:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:57.253 15:41:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:57.253 15:41:27 -- common/autotest_common.sh@10 -- # set +x 00:22:57.253 [2024-04-26 15:41:27.444094] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:57.253 [2024-04-26 15:41:27.444214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:57.532 [2024-04-26 15:41:27.582383] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:57.532 [2024-04-26 15:41:27.734956] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:57.532 [2024-04-26 15:41:27.735031] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:57.532 [2024-04-26 15:41:27.735044] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:57.532 [2024-04-26 15:41:27.735053] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:57.532 [2024-04-26 15:41:27.735061] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:57.532 [2024-04-26 15:41:27.735105] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:22:58.466 15:41:28 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:58.466 15:41:28 -- common/autotest_common.sh@850 -- # return 0 00:22:58.466 15:41:28 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:22:58.466 15:41:28 -- common/autotest_common.sh@716 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 15:41:28 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:58.466 15:41:28 -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:58.466 15:41:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 [2024-04-26 15:41:28.487251] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:58.466 15:41:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.466 15:41:28 -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:58.466 15:41:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 Malloc0 00:22:58.466 15:41:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.466 15:41:28 -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:58.466 15:41:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 15:41:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.466 15:41:28 -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:58.466 15:41:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 15:41:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.466 15:41:28 -- 
target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:58.466 15:41:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 [2024-04-26 15:41:28.550216] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:58.466 15:41:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:58.466 15:41:28 -- target/queue_depth.sh@30 -- # bdevperf_pid=73848 00:22:58.466 15:41:28 -- target/queue_depth.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:22:58.466 15:41:28 -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:58.466 15:41:28 -- target/queue_depth.sh@33 -- # waitforlisten 73848 /var/tmp/bdevperf.sock 00:22:58.466 15:41:28 -- common/autotest_common.sh@817 -- # '[' -z 73848 ']' 00:22:58.466 15:41:28 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:58.466 15:41:28 -- common/autotest_common.sh@822 -- # local max_retries=100 00:22:58.466 15:41:28 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:58.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:58.466 15:41:28 -- common/autotest_common.sh@826 -- # xtrace_disable 00:22:58.466 15:41:28 -- common/autotest_common.sh@10 -- # set +x 00:22:58.466 [2024-04-26 15:41:28.611755] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:22:58.466 [2024-04-26 15:41:28.611856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73848 ] 00:22:58.466 [2024-04-26 15:41:28.749230] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:58.723 [2024-04-26 15:41:28.908913] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:22:59.287 15:41:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:22:59.287 15:41:29 -- common/autotest_common.sh@850 -- # return 0 00:22:59.287 15:41:29 -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:22:59.287 15:41:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:22:59.287 15:41:29 -- common/autotest_common.sh@10 -- # set +x 00:22:59.544 NVMe0n1 00:22:59.544 15:41:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:22:59.544 15:41:29 -- target/queue_depth.sh@35 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:59.544 Running I/O for 10 seconds... 
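The queue-depth test drives bdevperf differently from the previous one: the app is started idle with -z and its own RPC socket (-r /var/tmp/bdevperf.sock), the NVMe0 controller is attached over that socket, and the ten-second verify run is then triggered externally with bdevperf.py perform_tests. Laid out as plain commands, with all flags and paths as traced above (rpc_cmd is the harness wrapper around rpc.py):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests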
00:23:11.735 00:23:11.735 Latency(us) 00:23:11.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.735 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:23:11.735 Verification LBA range: start 0x0 length 0x4000 00:23:11.735 NVMe0n1 : 10.06 8441.57 32.97 0.00 0.00 120786.65 12988.04 109147.23 00:23:11.735 =================================================================================================================== 00:23:11.735 Total : 8441.57 32.97 0.00 0.00 120786.65 12988.04 109147.23 00:23:11.735 0 00:23:11.735 15:41:39 -- target/queue_depth.sh@39 -- # killprocess 73848 00:23:11.735 15:41:39 -- common/autotest_common.sh@936 -- # '[' -z 73848 ']' 00:23:11.735 15:41:39 -- common/autotest_common.sh@940 -- # kill -0 73848 00:23:11.735 15:41:39 -- common/autotest_common.sh@941 -- # uname 00:23:11.735 15:41:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:11.735 15:41:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73848 00:23:11.735 15:41:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:11.735 killing process with pid 73848 00:23:11.735 Received shutdown signal, test time was about 10.000000 seconds 00:23:11.735 00:23:11.735 Latency(us) 00:23:11.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:11.735 =================================================================================================================== 00:23:11.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:11.735 15:41:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:11.735 15:41:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73848' 00:23:11.735 15:41:39 -- common/autotest_common.sh@955 -- # kill 73848 00:23:11.735 15:41:39 -- common/autotest_common.sh@960 -- # wait 73848 00:23:11.735 15:41:40 -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:23:11.735 15:41:40 -- target/queue_depth.sh@43 -- # nvmftestfini 00:23:11.735 15:41:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:11.735 15:41:40 -- nvmf/common.sh@117 -- # sync 00:23:11.735 15:41:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:11.735 15:41:40 -- nvmf/common.sh@120 -- # set +e 00:23:11.735 15:41:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:11.735 15:41:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:11.735 rmmod nvme_tcp 00:23:11.735 rmmod nvme_fabrics 00:23:11.735 rmmod nvme_keyring 00:23:11.735 15:41:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:11.735 15:41:40 -- nvmf/common.sh@124 -- # set -e 00:23:11.735 15:41:40 -- nvmf/common.sh@125 -- # return 0 00:23:11.735 15:41:40 -- nvmf/common.sh@478 -- # '[' -n 73798 ']' 00:23:11.735 15:41:40 -- nvmf/common.sh@479 -- # killprocess 73798 00:23:11.735 15:41:40 -- common/autotest_common.sh@936 -- # '[' -z 73798 ']' 00:23:11.735 15:41:40 -- common/autotest_common.sh@940 -- # kill -0 73798 00:23:11.735 15:41:40 -- common/autotest_common.sh@941 -- # uname 00:23:11.735 15:41:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:11.735 15:41:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 73798 00:23:11.735 killing process with pid 73798 00:23:11.735 15:41:40 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:11.735 15:41:40 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:11.735 15:41:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 73798' 00:23:11.735 15:41:40 -- 
common/autotest_common.sh@955 -- # kill 73798 00:23:11.735 15:41:40 -- common/autotest_common.sh@960 -- # wait 73798 00:23:11.735 15:41:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:11.735 15:41:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:11.735 15:41:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:11.735 15:41:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:11.735 15:41:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:11.735 15:41:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.735 15:41:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.735 15:41:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.735 15:41:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:11.735 00:23:11.735 real 0m13.681s 00:23:11.735 user 0m23.014s 00:23:11.735 sys 0m2.460s 00:23:11.735 15:41:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:11.735 15:41:40 -- common/autotest_common.sh@10 -- # set +x 00:23:11.735 ************************************ 00:23:11.735 END TEST nvmf_queue_depth 00:23:11.735 ************************************ 00:23:11.735 15:41:40 -- nvmf/nvmf.sh@52 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:23:11.735 15:41:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:11.735 15:41:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:11.735 15:41:40 -- common/autotest_common.sh@10 -- # set +x 00:23:11.735 ************************************ 00:23:11.735 START TEST nvmf_multipath 00:23:11.736 ************************************ 00:23:11.736 15:41:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:23:11.736 * Looking for test storage... 
00:23:11.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:11.736 15:41:40 -- target/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:11.736 15:41:40 -- nvmf/common.sh@7 -- # uname -s 00:23:11.736 15:41:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:11.736 15:41:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:11.736 15:41:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:11.736 15:41:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:11.736 15:41:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:11.736 15:41:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:11.736 15:41:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:11.736 15:41:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:11.736 15:41:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:11.736 15:41:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:11.736 15:41:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:23:11.736 15:41:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:23:11.736 15:41:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:11.736 15:41:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:11.736 15:41:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:11.736 15:41:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:11.736 15:41:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:11.736 15:41:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:11.736 15:41:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:11.736 15:41:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:11.736 15:41:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.736 15:41:40 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.736 15:41:40 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.736 15:41:40 -- paths/export.sh@5 -- # export PATH 00:23:11.736 15:41:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:11.736 15:41:40 -- nvmf/common.sh@47 -- # : 0 00:23:11.736 15:41:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:11.736 15:41:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:11.736 15:41:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:11.736 15:41:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:11.736 15:41:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:11.736 15:41:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:11.736 15:41:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:11.736 15:41:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:11.736 15:41:40 -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:11.736 15:41:40 -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:11.736 15:41:40 -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:11.736 15:41:40 -- target/multipath.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:23:11.736 15:41:40 -- target/multipath.sh@43 -- # nvmftestinit 00:23:11.736 15:41:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:11.736 15:41:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:11.736 15:41:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:11.736 15:41:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:11.736 15:41:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:11.736 15:41:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:11.736 15:41:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:11.736 15:41:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:11.736 15:41:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:11.736 15:41:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:11.736 15:41:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:11.736 15:41:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:11.736 15:41:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:11.736 15:41:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:11.736 15:41:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:11.736 15:41:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:11.736 15:41:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:11.736 15:41:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:11.736 15:41:40 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:11.736 15:41:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:11.736 15:41:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:11.736 15:41:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:11.736 15:41:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:11.736 15:41:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:11.736 15:41:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:11.736 15:41:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:11.736 15:41:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:11.736 15:41:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:11.736 Cannot find device "nvmf_tgt_br" 00:23:11.736 15:41:40 -- nvmf/common.sh@155 -- # true 00:23:11.736 15:41:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:11.736 Cannot find device "nvmf_tgt_br2" 00:23:11.736 15:41:40 -- nvmf/common.sh@156 -- # true 00:23:11.736 15:41:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:11.736 15:41:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:11.736 Cannot find device "nvmf_tgt_br" 00:23:11.736 15:41:40 -- nvmf/common.sh@158 -- # true 00:23:11.736 15:41:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:11.736 Cannot find device "nvmf_tgt_br2" 00:23:11.736 15:41:40 -- nvmf/common.sh@159 -- # true 00:23:11.736 15:41:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:11.736 15:41:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:11.736 15:41:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:11.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.736 15:41:40 -- nvmf/common.sh@162 -- # true 00:23:11.736 15:41:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:11.736 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:11.736 15:41:40 -- nvmf/common.sh@163 -- # true 00:23:11.736 15:41:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:11.736 15:41:40 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:11.736 15:41:40 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:11.736 15:41:40 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:11.736 15:41:40 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:11.736 15:41:40 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:11.736 15:41:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:11.736 15:41:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:11.736 15:41:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:11.736 15:41:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:11.736 15:41:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:11.736 15:41:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:11.736 15:41:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:11.736 15:41:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:23:11.736 15:41:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:11.736 15:41:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:11.736 15:41:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:11.736 15:41:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:11.736 15:41:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:11.736 15:41:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:11.736 15:41:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:11.736 15:41:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:11.736 15:41:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:11.736 15:41:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:11.736 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:11.736 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:23:11.736 00:23:11.736 --- 10.0.0.2 ping statistics --- 00:23:11.736 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.736 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:23:11.736 15:41:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:11.736 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:11.737 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.085 ms 00:23:11.737 00:23:11.737 --- 10.0.0.3 ping statistics --- 00:23:11.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.737 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:23:11.737 15:41:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:11.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:11.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.038 ms 00:23:11.737 00:23:11.737 --- 10.0.0.1 ping statistics --- 00:23:11.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:11.737 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:23:11.737 15:41:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:11.737 15:41:41 -- nvmf/common.sh@422 -- # return 0 00:23:11.737 15:41:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:11.737 15:41:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:11.737 15:41:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:11.737 15:41:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:11.737 15:41:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:11.737 15:41:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:11.737 15:41:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:11.737 15:41:41 -- target/multipath.sh@45 -- # '[' -z 10.0.0.3 ']' 00:23:11.737 15:41:41 -- target/multipath.sh@51 -- # '[' tcp '!=' tcp ']' 00:23:11.737 15:41:41 -- target/multipath.sh@57 -- # nvmfappstart -m 0xF 00:23:11.737 15:41:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:11.737 15:41:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:11.737 15:41:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.737 15:41:41 -- nvmf/common.sh@470 -- # nvmfpid=74190 00:23:11.737 15:41:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:11.737 15:41:41 -- nvmf/common.sh@471 -- # waitforlisten 74190 00:23:11.737 15:41:41 -- common/autotest_common.sh@817 -- # '[' -z 74190 ']' 00:23:11.737 15:41:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:11.737 15:41:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:11.737 15:41:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:11.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:11.737 15:41:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:11.737 15:41:41 -- common/autotest_common.sh@10 -- # set +x 00:23:11.737 [2024-04-26 15:41:41.229092] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:23:11.737 [2024-04-26 15:41:41.229213] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:11.737 [2024-04-26 15:41:41.371224] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:11.737 [2024-04-26 15:41:41.492057] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:11.737 [2024-04-26 15:41:41.492118] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:11.737 [2024-04-26 15:41:41.492131] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:11.737 [2024-04-26 15:41:41.492160] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:11.737 [2024-04-26 15:41:41.492169] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
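The RPC and nvme-cli trace that follows exercises this multipath target end to end. Condensed into a sketch (all option values are copied from the calls logged below; $NVME_HOSTNQN and $NVME_HOSTID stand in for the generated host NQN and host ID used in this run):

    # target: ANA-enabled subsystem (-r) with a namespace and two TCP listeners
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420

    # host: connect the same subsystem over both paths
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G
    nvme connect --hostnqn=$NVME_HOSTNQN --hostid=$NVME_HOSTID -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G

    # flip the ANA state per listener, then confirm what the host reports for each path
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
    scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized
    cat /sys/block/nvme0c0n1/ana_state /sys/block/nvme0c1n1/ana_state

The remaining reactor start-up notices and the actual test trace continue below.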
00:23:11.737 [2024-04-26 15:41:41.492246] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:11.737 [2024-04-26 15:41:41.492376] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:11.737 [2024-04-26 15:41:41.493225] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:11.737 [2024-04-26 15:41:41.493232] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:11.995 15:41:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:11.995 15:41:42 -- common/autotest_common.sh@850 -- # return 0 00:23:11.995 15:41:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:11.995 15:41:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:11.995 15:41:42 -- common/autotest_common.sh@10 -- # set +x 00:23:11.995 15:41:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:11.995 15:41:42 -- target/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:12.253 [2024-04-26 15:41:42.492406] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:12.253 15:41:42 -- target/multipath.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:12.510 Malloc0 00:23:12.768 15:41:42 -- target/multipath.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -r 00:23:13.026 15:41:43 -- target/multipath.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:13.283 15:41:43 -- target/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:13.539 [2024-04-26 15:41:43.643842] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:13.539 15:41:43 -- target/multipath.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 00:23:13.797 [2024-04-26 15:41:43.916195] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:23:13.797 15:41:43 -- target/multipath.sh@67 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 -g -G 00:23:14.057 15:41:44 -- target/multipath.sh@68 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.3 -s 4420 -g -G 00:23:14.313 15:41:44 -- target/multipath.sh@69 -- # waitforserial SPDKISFASTANDAWESOME 00:23:14.313 15:41:44 -- common/autotest_common.sh@1184 -- # local i=0 00:23:14.313 15:41:44 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:14.313 15:41:44 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:14.313 15:41:44 -- common/autotest_common.sh@1191 -- # sleep 2 00:23:16.211 15:41:46 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:23:16.211 15:41:46 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:23:16.211 15:41:46 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:23:16.212 15:41:46 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:23:16.212 15:41:46 -- 
common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:23:16.212 15:41:46 -- common/autotest_common.sh@1194 -- # return 0 00:23:16.212 15:41:46 -- target/multipath.sh@72 -- # get_subsystem nqn.2016-06.io.spdk:cnode1 SPDKISFASTANDAWESOME 00:23:16.212 15:41:46 -- target/multipath.sh@34 -- # local nqn=nqn.2016-06.io.spdk:cnode1 serial=SPDKISFASTANDAWESOME s 00:23:16.212 15:41:46 -- target/multipath.sh@36 -- # for s in /sys/class/nvme-subsystem/* 00:23:16.212 15:41:46 -- target/multipath.sh@37 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:23:16.212 15:41:46 -- target/multipath.sh@37 -- # [[ SPDKISFASTANDAWESOME == \S\P\D\K\I\S\F\A\S\T\A\N\D\A\W\E\S\O\M\E ]] 00:23:16.212 15:41:46 -- target/multipath.sh@38 -- # echo nvme-subsys0 00:23:16.212 15:41:46 -- target/multipath.sh@38 -- # return 0 00:23:16.212 15:41:46 -- target/multipath.sh@72 -- # subsystem=nvme-subsys0 00:23:16.212 15:41:46 -- target/multipath.sh@73 -- # paths=(/sys/class/nvme-subsystem/$subsystem/nvme*/nvme*c*) 00:23:16.212 15:41:46 -- target/multipath.sh@74 -- # paths=("${paths[@]##*/}") 00:23:16.212 15:41:46 -- target/multipath.sh@76 -- # (( 2 == 2 )) 00:23:16.212 15:41:46 -- target/multipath.sh@78 -- # p0=nvme0c0n1 00:23:16.212 15:41:46 -- target/multipath.sh@79 -- # p1=nvme0c1n1 00:23:16.212 15:41:46 -- target/multipath.sh@81 -- # check_ana_state nvme0c0n1 optimized 00:23:16.212 15:41:46 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:23:16.212 15:41:46 -- target/multipath.sh@22 -- # local timeout=20 00:23:16.212 15:41:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:16.212 15:41:46 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:16.212 15:41:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:16.212 15:41:46 -- target/multipath.sh@82 -- # check_ana_state nvme0c1n1 optimized 00:23:16.212 15:41:46 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:23:16.212 15:41:46 -- target/multipath.sh@22 -- # local timeout=20 00:23:16.212 15:41:46 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:16.212 15:41:46 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:16.212 15:41:46 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:16.212 15:41:46 -- target/multipath.sh@85 -- # echo numa 00:23:16.212 15:41:46 -- target/multipath.sh@88 -- # fio_pid=74334 00:23:16.212 15:41:46 -- target/multipath.sh@90 -- # sleep 1 00:23:16.212 15:41:46 -- target/multipath.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:23:16.212 [global] 00:23:16.212 thread=1 00:23:16.212 invalidate=1 00:23:16.212 rw=randrw 00:23:16.212 time_based=1 00:23:16.212 runtime=6 00:23:16.212 ioengine=libaio 00:23:16.212 direct=1 00:23:16.212 bs=4096 00:23:16.212 iodepth=128 00:23:16.212 norandommap=0 00:23:16.212 numjobs=1 00:23:16.212 00:23:16.212 verify_dump=1 00:23:16.212 verify_backlog=512 00:23:16.212 verify_state_save=0 00:23:16.212 do_verify=1 00:23:16.212 verify=crc32c-intel 00:23:16.212 [job0] 00:23:16.212 filename=/dev/nvme0n1 00:23:16.212 Could not set queue depth (nvme0n1) 00:23:16.469 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:16.469 fio-3.35 00:23:16.469 Starting 1 thread 00:23:17.403 15:41:47 -- target/multipath.sh@92 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:17.403 15:41:47 -- target/multipath.sh@93 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:17.662 15:41:47 -- target/multipath.sh@95 -- # check_ana_state nvme0c0n1 inaccessible 00:23:17.662 15:41:47 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:23:17.662 15:41:47 -- target/multipath.sh@22 -- # local timeout=20 00:23:17.662 15:41:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:17.662 15:41:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:17.662 15:41:47 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:17.662 15:41:47 -- target/multipath.sh@96 -- # check_ana_state nvme0c1n1 non-optimized 00:23:17.662 15:41:47 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:23:17.662 15:41:47 -- target/multipath.sh@22 -- # local timeout=20 00:23:17.662 15:41:47 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:17.662 15:41:47 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:17.662 15:41:47 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:17.662 15:41:47 -- target/multipath.sh@25 -- # sleep 1s 00:23:19.036 15:41:48 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:19.036 15:41:48 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:19.036 15:41:48 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:19.036 15:41:48 -- target/multipath.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:19.036 15:41:49 -- target/multipath.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:19.294 15:41:49 -- target/multipath.sh@101 -- # check_ana_state nvme0c0n1 non-optimized 00:23:19.294 15:41:49 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:23:19.294 15:41:49 -- target/multipath.sh@22 -- # local timeout=20 00:23:19.294 15:41:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:19.294 15:41:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:19.294 15:41:49 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:19.294 15:41:49 -- target/multipath.sh@102 -- # check_ana_state nvme0c1n1 inaccessible 00:23:19.294 15:41:49 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:23:19.294 15:41:49 -- target/multipath.sh@22 -- # local timeout=20 00:23:19.294 15:41:49 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:19.294 15:41:49 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:19.294 15:41:49 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:19.294 15:41:49 -- target/multipath.sh@25 -- # sleep 1s 00:23:20.227 15:41:50 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:20.227 15:41:50 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:20.227 15:41:50 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:20.227 15:41:50 -- target/multipath.sh@104 -- # wait 74334 00:23:22.761 00:23:22.761 job0: (groupid=0, jobs=1): err= 0: pid=74355: Fri Apr 26 15:41:52 2024 00:23:22.761 read: IOPS=11.0k, BW=43.0MiB/s (45.1MB/s)(258MiB/6007msec) 00:23:22.761 slat (usec): min=5, max=6173, avg=51.58, stdev=232.36 00:23:22.761 clat (usec): min=599, max=18356, avg=7908.59, stdev=1246.27 00:23:22.761 lat (usec): min=929, max=18366, avg=7960.16, stdev=1256.00 00:23:22.761 clat percentiles (usec): 00:23:22.761 | 1.00th=[ 4621], 5.00th=[ 6128], 10.00th=[ 6718], 20.00th=[ 7177], 00:23:22.761 | 30.00th=[ 7373], 40.00th=[ 7504], 50.00th=[ 7701], 60.00th=[ 8029], 00:23:22.761 | 70.00th=[ 8356], 80.00th=[ 8717], 90.00th=[ 9241], 95.00th=[10028], 00:23:22.761 | 99.00th=[11863], 99.50th=[12256], 99.90th=[13304], 99.95th=[16712], 00:23:22.761 | 99.99th=[17957] 00:23:22.761 bw ( KiB/s): min= 8712, max=30784, per=52.80%, avg=23253.09, stdev=6420.30, samples=11 00:23:22.761 iops : min= 2178, max= 7696, avg=5813.27, stdev=1605.07, samples=11 00:23:22.761 write: IOPS=6524, BW=25.5MiB/s (26.7MB/s)(137MiB/5386msec); 0 zone resets 00:23:22.761 slat (usec): min=12, max=5639, avg=63.48, stdev=163.47 00:23:22.761 clat (usec): min=441, max=17610, avg=6797.60, stdev=1136.58 00:23:22.761 lat (usec): min=560, max=17635, avg=6861.08, stdev=1141.19 00:23:22.761 clat percentiles (usec): 00:23:22.761 | 1.00th=[ 3523], 5.00th=[ 4752], 10.00th=[ 5669], 20.00th=[ 6194], 00:23:22.761 | 30.00th=[ 6456], 40.00th=[ 6652], 50.00th=[ 6849], 60.00th=[ 7046], 00:23:22.761 | 70.00th=[ 7242], 80.00th=[ 7439], 90.00th=[ 7832], 95.00th=[ 8291], 00:23:22.761 | 99.00th=[10159], 99.50th=[10814], 99.90th=[13042], 99.95th=[16450], 00:23:22.761 | 99.99th=[17695] 00:23:22.761 bw ( KiB/s): min= 9248, max=30064, per=89.17%, avg=23271.27, stdev=5990.80, samples=11 00:23:22.761 iops : min= 2312, max= 7516, avg=5817.82, stdev=1497.70, samples=11 00:23:22.761 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:23:22.761 lat (msec) : 2=0.09%, 4=0.73%, 10=95.20%, 20=3.96% 00:23:22.761 cpu : usr=5.51%, sys=23.34%, ctx=6540, majf=0, minf=96 00:23:22.761 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:22.761 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:22.761 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:22.761 issued rwts: total=66140,35139,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:22.761 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:22.761 00:23:22.761 Run status group 0 (all jobs): 00:23:22.761 READ: bw=43.0MiB/s (45.1MB/s), 43.0MiB/s-43.0MiB/s (45.1MB/s-45.1MB/s), io=258MiB (271MB), run=6007-6007msec 00:23:22.761 WRITE: bw=25.5MiB/s (26.7MB/s), 25.5MiB/s-25.5MiB/s (26.7MB/s-26.7MB/s), io=137MiB (144MB), run=5386-5386msec 00:23:22.761 00:23:22.761 Disk stats (read/write): 00:23:22.761 nvme0n1: ios=65171/34510, merge=0/0, ticks=482525/217671, in_queue=700196, util=98.66% 00:23:22.761 15:41:52 -- target/multipath.sh@106 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:22.761 15:41:52 -- target/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n optimized 00:23:23.021 15:41:53 -- target/multipath.sh@109 -- 
# check_ana_state nvme0c0n1 optimized 00:23:23.021 15:41:53 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=optimized 00:23:23.021 15:41:53 -- target/multipath.sh@22 -- # local timeout=20 00:23:23.021 15:41:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:23.021 15:41:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:23.021 15:41:53 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:23.021 15:41:53 -- target/multipath.sh@110 -- # check_ana_state nvme0c1n1 optimized 00:23:23.021 15:41:53 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=optimized 00:23:23.021 15:41:53 -- target/multipath.sh@22 -- # local timeout=20 00:23:23.021 15:41:53 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:23.021 15:41:53 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:23.021 15:41:53 -- target/multipath.sh@25 -- # [[ inaccessible != \o\p\t\i\m\i\z\e\d ]] 00:23:23.021 15:41:53 -- target/multipath.sh@25 -- # sleep 1s 00:23:24.393 15:41:54 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:24.393 15:41:54 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:24.393 15:41:54 -- target/multipath.sh@25 -- # [[ optimized != \o\p\t\i\m\i\z\e\d ]] 00:23:24.393 15:41:54 -- target/multipath.sh@113 -- # echo round-robin 00:23:24.393 15:41:54 -- target/multipath.sh@116 -- # fio_pid=74489 00:23:24.393 15:41:54 -- target/multipath.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randrw -r 6 -v 00:23:24.393 15:41:54 -- target/multipath.sh@118 -- # sleep 1 00:23:24.393 [global] 00:23:24.393 thread=1 00:23:24.393 invalidate=1 00:23:24.393 rw=randrw 00:23:24.393 time_based=1 00:23:24.393 runtime=6 00:23:24.393 ioengine=libaio 00:23:24.393 direct=1 00:23:24.393 bs=4096 00:23:24.393 iodepth=128 00:23:24.393 norandommap=0 00:23:24.393 numjobs=1 00:23:24.393 00:23:24.393 verify_dump=1 00:23:24.393 verify_backlog=512 00:23:24.393 verify_state_save=0 00:23:24.393 do_verify=1 00:23:24.393 verify=crc32c-intel 00:23:24.393 [job0] 00:23:24.393 filename=/dev/nvme0n1 00:23:24.393 Could not set queue depth (nvme0n1) 00:23:24.393 job0: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:23:24.393 fio-3.35 00:23:24.393 Starting 1 thread 00:23:25.327 15:41:55 -- target/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:25.327 15:41:55 -- target/multipath.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n non_optimized 00:23:25.585 15:41:55 -- target/multipath.sh@123 -- # check_ana_state nvme0c0n1 inaccessible 00:23:25.585 15:41:55 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=inaccessible 00:23:25.585 15:41:55 -- target/multipath.sh@22 -- # local timeout=20 00:23:25.585 15:41:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:25.585 15:41:55 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c0n1/ana_state ]] 00:23:25.585 15:41:55 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:25.585 15:41:55 -- target/multipath.sh@124 -- # check_ana_state nvme0c1n1 non-optimized 00:23:25.585 15:41:55 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=non-optimized 00:23:25.585 15:41:55 -- target/multipath.sh@22 -- # local timeout=20 00:23:25.585 15:41:55 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:25.585 15:41:55 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:25.585 15:41:55 -- target/multipath.sh@25 -- # [[ optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:25.585 15:41:55 -- target/multipath.sh@25 -- # sleep 1s 00:23:26.960 15:41:56 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:26.960 15:41:56 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:26.960 15:41:56 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:26.960 15:41:56 -- target/multipath.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.960 15:41:57 -- target/multipath.sh@127 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.3 -s 4420 -n inaccessible 00:23:27.218 15:41:57 -- target/multipath.sh@129 -- # check_ana_state nvme0c0n1 non-optimized 00:23:27.218 15:41:57 -- target/multipath.sh@18 -- # local path=nvme0c0n1 ana_state=non-optimized 00:23:27.218 15:41:57 -- target/multipath.sh@22 -- # local timeout=20 00:23:27.218 15:41:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c0n1/ana_state 00:23:27.218 15:41:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c0n1/ana_state ]] 00:23:27.218 15:41:57 -- target/multipath.sh@25 -- # [[ non-optimized != \n\o\n\-\o\p\t\i\m\i\z\e\d ]] 00:23:27.218 15:41:57 -- target/multipath.sh@130 -- # check_ana_state nvme0c1n1 inaccessible 00:23:27.218 15:41:57 -- target/multipath.sh@18 -- # local path=nvme0c1n1 ana_state=inaccessible 00:23:27.218 15:41:57 -- target/multipath.sh@22 -- # local timeout=20 00:23:27.218 15:41:57 -- target/multipath.sh@23 -- # local ana_state_f=/sys/block/nvme0c1n1/ana_state 00:23:27.218 15:41:57 -- target/multipath.sh@25 -- # [[ ! -e /sys/block/nvme0c1n1/ana_state ]] 00:23:27.218 15:41:57 -- target/multipath.sh@25 -- # [[ non-optimized != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:27.218 15:41:57 -- target/multipath.sh@25 -- # sleep 1s 00:23:28.150 15:41:58 -- target/multipath.sh@26 -- # (( timeout-- == 0 )) 00:23:28.150 15:41:58 -- target/multipath.sh@25 -- # [[ ! 
-e /sys/block/nvme0c1n1/ana_state ]] 00:23:28.150 15:41:58 -- target/multipath.sh@25 -- # [[ inaccessible != \i\n\a\c\c\e\s\s\i\b\l\e ]] 00:23:28.150 15:41:58 -- target/multipath.sh@132 -- # wait 74489 00:23:30.721 00:23:30.721 job0: (groupid=0, jobs=1): err= 0: pid=74510: Fri Apr 26 15:42:00 2024 00:23:30.721 read: IOPS=12.4k, BW=48.5MiB/s (50.8MB/s)(291MiB/6004msec) 00:23:30.721 slat (usec): min=2, max=7294, avg=41.39, stdev=201.43 00:23:30.721 clat (usec): min=330, max=44474, avg=7138.34, stdev=1538.54 00:23:30.721 lat (usec): min=341, max=44483, avg=7179.72, stdev=1556.06 00:23:30.721 clat percentiles (usec): 00:23:30.721 | 1.00th=[ 3294], 5.00th=[ 4424], 10.00th=[ 4948], 20.00th=[ 5866], 00:23:30.721 | 30.00th=[ 6849], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:23:30.721 | 70.00th=[ 7767], 80.00th=[ 8225], 90.00th=[ 8717], 95.00th=[ 9241], 00:23:30.721 | 99.00th=[11207], 99.50th=[11600], 99.90th=[12387], 99.95th=[12649], 00:23:30.721 | 99.99th=[13435] 00:23:30.721 bw ( KiB/s): min=11696, max=44008, per=54.78%, avg=27179.91, stdev=9274.85, samples=11 00:23:30.721 iops : min= 2924, max=11002, avg=6794.91, stdev=2318.75, samples=11 00:23:30.721 write: IOPS=7601, BW=29.7MiB/s (31.1MB/s)(152MiB/5119msec); 0 zone resets 00:23:30.721 slat (usec): min=3, max=1940, avg=52.72, stdev=129.43 00:23:30.721 clat (usec): min=163, max=12634, avg=5921.52, stdev=1481.58 00:23:30.721 lat (usec): min=213, max=12655, avg=5974.24, stdev=1494.94 00:23:30.721 clat percentiles (usec): 00:23:30.721 | 1.00th=[ 2540], 5.00th=[ 3294], 10.00th=[ 3720], 20.00th=[ 4359], 00:23:30.721 | 30.00th=[ 5145], 40.00th=[ 5997], 50.00th=[ 6325], 60.00th=[ 6652], 00:23:30.721 | 70.00th=[ 6849], 80.00th=[ 7111], 90.00th=[ 7439], 95.00th=[ 7701], 00:23:30.721 | 99.00th=[ 9241], 99.50th=[ 9896], 99.90th=[11469], 99.95th=[11863], 00:23:30.721 | 99.99th=[12387] 00:23:30.721 bw ( KiB/s): min=12103, max=43128, per=89.35%, avg=27168.91, stdev=8995.79, samples=11 00:23:30.721 iops : min= 3025, max=10782, avg=6792.09, stdev=2249.11, samples=11 00:23:30.721 lat (usec) : 250=0.01%, 500=0.01%, 750=0.03%, 1000=0.02% 00:23:30.721 lat (msec) : 2=0.19%, 4=6.28%, 10=91.56%, 20=1.92%, 50=0.01% 00:23:30.721 cpu : usr=6.45%, sys=24.69%, ctx=8411, majf=0, minf=133 00:23:30.721 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.7% 00:23:30.721 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:30.721 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:30.721 issued rwts: total=74472,38912,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:30.721 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:30.721 00:23:30.721 Run status group 0 (all jobs): 00:23:30.721 READ: bw=48.5MiB/s (50.8MB/s), 48.5MiB/s-48.5MiB/s (50.8MB/s-50.8MB/s), io=291MiB (305MB), run=6004-6004msec 00:23:30.721 WRITE: bw=29.7MiB/s (31.1MB/s), 29.7MiB/s-29.7MiB/s (31.1MB/s-31.1MB/s), io=152MiB (159MB), run=5119-5119msec 00:23:30.721 00:23:30.721 Disk stats (read/write): 00:23:30.721 nvme0n1: ios=72964/38912, merge=0/0, ticks=478874/206249, in_queue=685123, util=98.60% 00:23:30.721 15:42:00 -- target/multipath.sh@134 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:23:30.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:23:30.721 15:42:00 -- target/multipath.sh@135 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:23:30.721 15:42:00 -- common/autotest_common.sh@1205 -- # local i=0 00:23:30.721 15:42:00 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 
00:23:30.721 15:42:00 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:30.721 15:42:00 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:23:30.721 15:42:00 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:23:30.721 15:42:00 -- common/autotest_common.sh@1217 -- # return 0 00:23:30.721 15:42:00 -- target/multipath.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:30.721 15:42:00 -- target/multipath.sh@139 -- # rm -f ./local-job0-0-verify.state 00:23:30.721 15:42:00 -- target/multipath.sh@140 -- # rm -f ./local-job1-1-verify.state 00:23:30.721 15:42:00 -- target/multipath.sh@142 -- # trap - SIGINT SIGTERM EXIT 00:23:30.721 15:42:00 -- target/multipath.sh@144 -- # nvmftestfini 00:23:30.721 15:42:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:30.721 15:42:00 -- nvmf/common.sh@117 -- # sync 00:23:30.721 15:42:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:30.721 15:42:00 -- nvmf/common.sh@120 -- # set +e 00:23:30.721 15:42:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:30.721 15:42:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:30.721 rmmod nvme_tcp 00:23:30.721 rmmod nvme_fabrics 00:23:30.721 rmmod nvme_keyring 00:23:30.721 15:42:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:30.721 15:42:00 -- nvmf/common.sh@124 -- # set -e 00:23:30.721 15:42:00 -- nvmf/common.sh@125 -- # return 0 00:23:30.721 15:42:00 -- nvmf/common.sh@478 -- # '[' -n 74190 ']' 00:23:30.721 15:42:00 -- nvmf/common.sh@479 -- # killprocess 74190 00:23:30.721 15:42:00 -- common/autotest_common.sh@936 -- # '[' -z 74190 ']' 00:23:30.721 15:42:00 -- common/autotest_common.sh@940 -- # kill -0 74190 00:23:30.721 15:42:00 -- common/autotest_common.sh@941 -- # uname 00:23:30.721 15:42:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:30.721 15:42:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74190 00:23:30.978 killing process with pid 74190 00:23:30.978 15:42:01 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:23:30.978 15:42:01 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:23:30.978 15:42:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74190' 00:23:30.978 15:42:01 -- common/autotest_common.sh@955 -- # kill 74190 00:23:30.978 15:42:01 -- common/autotest_common.sh@960 -- # wait 74190 00:23:31.235 15:42:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:31.235 15:42:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:31.235 15:42:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:23:31.235 15:42:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:31.235 15:42:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:31.235 15:42:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.235 15:42:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.235 15:42:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.235 15:42:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:31.235 00:23:31.235 real 0m20.666s 00:23:31.235 user 1m21.067s 00:23:31.235 sys 0m6.551s 00:23:31.235 15:42:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:31.235 15:42:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.235 ************************************ 00:23:31.235 END TEST nvmf_multipath 00:23:31.235 ************************************ 00:23:31.235 15:42:01 -- 
nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:23:31.235 15:42:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:31.235 15:42:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:31.235 15:42:01 -- common/autotest_common.sh@10 -- # set +x 00:23:31.235 ************************************ 00:23:31.235 START TEST nvmf_zcopy 00:23:31.235 ************************************ 00:23:31.235 15:42:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:23:31.493 * Looking for test storage... 00:23:31.493 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:31.493 15:42:01 -- target/zcopy.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:31.493 15:42:01 -- nvmf/common.sh@7 -- # uname -s 00:23:31.493 15:42:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:31.493 15:42:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:31.493 15:42:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:31.493 15:42:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:31.493 15:42:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:31.493 15:42:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:31.493 15:42:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:31.493 15:42:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:31.493 15:42:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:31.493 15:42:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:31.493 15:42:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:23:31.493 15:42:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:23:31.493 15:42:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:31.493 15:42:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:31.493 15:42:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:31.493 15:42:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:31.493 15:42:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:31.493 15:42:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:31.493 15:42:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:31.493 15:42:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:31.493 15:42:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.493 15:42:01 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.493 15:42:01 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.493 15:42:01 -- paths/export.sh@5 -- # export PATH 00:23:31.493 15:42:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:31.493 15:42:01 -- nvmf/common.sh@47 -- # : 0 00:23:31.493 15:42:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:31.493 15:42:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:31.493 15:42:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:31.493 15:42:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:31.493 15:42:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:31.493 15:42:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:31.493 15:42:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:31.493 15:42:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:31.493 15:42:01 -- target/zcopy.sh@12 -- # nvmftestinit 00:23:31.493 15:42:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:31.493 15:42:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:31.493 15:42:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:31.493 15:42:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:31.493 15:42:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:31.493 15:42:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:31.493 15:42:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:31.493 15:42:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:31.493 15:42:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:31.493 15:42:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:31.493 15:42:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:31.493 15:42:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:31.493 15:42:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:31.493 15:42:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:31.493 15:42:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:31.493 15:42:01 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:31.493 15:42:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:31.493 15:42:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:31.493 15:42:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:31.493 15:42:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:31.493 15:42:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:31.493 15:42:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:31.493 15:42:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:31.493 15:42:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:31.493 15:42:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:31.493 15:42:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:31.493 15:42:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:31.493 15:42:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:31.493 Cannot find device "nvmf_tgt_br" 00:23:31.493 15:42:01 -- nvmf/common.sh@155 -- # true 00:23:31.493 15:42:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:31.493 Cannot find device "nvmf_tgt_br2" 00:23:31.493 15:42:01 -- nvmf/common.sh@156 -- # true 00:23:31.493 15:42:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:31.493 15:42:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:31.493 Cannot find device "nvmf_tgt_br" 00:23:31.493 15:42:01 -- nvmf/common.sh@158 -- # true 00:23:31.493 15:42:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:31.493 Cannot find device "nvmf_tgt_br2" 00:23:31.493 15:42:01 -- nvmf/common.sh@159 -- # true 00:23:31.493 15:42:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:31.493 15:42:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:31.493 15:42:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:31.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.493 15:42:01 -- nvmf/common.sh@162 -- # true 00:23:31.493 15:42:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:31.493 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:31.493 15:42:01 -- nvmf/common.sh@163 -- # true 00:23:31.494 15:42:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:31.494 15:42:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:31.494 15:42:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:31.494 15:42:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:31.751 15:42:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:31.751 15:42:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:31.751 15:42:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:31.751 15:42:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:31.751 15:42:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:31.751 15:42:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:31.751 15:42:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:31.751 15:42:01 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:31.751 15:42:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:31.751 15:42:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:31.751 15:42:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:31.751 15:42:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:31.751 15:42:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:31.751 15:42:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:31.751 15:42:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:31.751 15:42:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:31.751 15:42:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:31.751 15:42:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:31.751 15:42:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:31.751 15:42:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:31.751 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:31.751 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.089 ms 00:23:31.751 00:23:31.751 --- 10.0.0.2 ping statistics --- 00:23:31.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.751 rtt min/avg/max/mdev = 0.089/0.089/0.089/0.000 ms 00:23:31.751 15:42:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:31.751 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:31.751 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.057 ms 00:23:31.751 00:23:31.751 --- 10.0.0.3 ping statistics --- 00:23:31.751 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.752 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:23:31.752 15:42:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:31.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:31.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.029 ms 00:23:31.752 00:23:31.752 --- 10.0.0.1 ping statistics --- 00:23:31.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:31.752 rtt min/avg/max/mdev = 0.029/0.029/0.029/0.000 ms 00:23:31.752 15:42:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:31.752 15:42:01 -- nvmf/common.sh@422 -- # return 0 00:23:31.752 15:42:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:31.752 15:42:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:31.752 15:42:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:31.752 15:42:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:31.752 15:42:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:31.752 15:42:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:31.752 15:42:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:31.752 15:42:02 -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:23:31.752 15:42:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:31.752 15:42:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:31.752 15:42:02 -- common/autotest_common.sh@10 -- # set +x 00:23:31.752 15:42:02 -- nvmf/common.sh@470 -- # nvmfpid=74792 00:23:31.752 15:42:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:31.752 15:42:02 -- nvmf/common.sh@471 -- # waitforlisten 74792 00:23:31.752 15:42:02 -- common/autotest_common.sh@817 -- # '[' -z 74792 ']' 00:23:31.752 15:42:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:31.752 15:42:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:31.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:31.752 15:42:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:31.752 15:42:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:31.752 15:42:02 -- common/autotest_common.sh@10 -- # set +x 00:23:32.008 [2024-04-26 15:42:02.068226] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:23:32.008 [2024-04-26 15:42:02.068330] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.008 [2024-04-26 15:42:02.205314] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.265 [2024-04-26 15:42:02.338834] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.265 [2024-04-26 15:42:02.338904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.265 [2024-04-26 15:42:02.338920] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.265 [2024-04-26 15:42:02.338930] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.265 [2024-04-26 15:42:02.338939] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
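For reference, the nvmf_veth_init sequence traced above amounts to roughly the following topology setup (a condensed sketch pulled from the trace; the namespace, interface, and address names are the ones nvmf/common.sh uses, and the commands assume root on the test VM):

    # The target runs in its own network namespace; the initiator stays on the host.
    ip netns add nvmf_tgt_ns_spdk

    # One veth pair for the initiator and two for the target listeners.
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

    # 10.0.0.1 is the initiator side; 10.0.0.2 and 10.0.0.3 are the target addresses.
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

    # Bring the links up and join the host-side peers with the nvmf_br bridge.
    ip link set nvmf_init_if up
    ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up
    ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge
    ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br

    # Let NVMe/TCP traffic (port 4420) in and let it cross the bridge.
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

The three pings above simply confirm that 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace before nvme-tcp is loaded and nvmf_tgt is started in the target namespace.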
00:23:32.265 [2024-04-26 15:42:02.338993] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.829 15:42:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:32.829 15:42:03 -- common/autotest_common.sh@850 -- # return 0 00:23:32.829 15:42:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:32.829 15:42:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:32.829 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.829 15:42:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.829 15:42:03 -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:23:32.829 15:42:03 -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:23:32.829 15:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.829 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.829 [2024-04-26 15:42:03.101807] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:32.829 15:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.829 15:42:03 -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:23:32.829 15:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.829 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.829 15:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.829 15:42:03 -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:32.829 15:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.829 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:32.829 [2024-04-26 15:42:03.117880] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:32.829 15:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:32.830 15:42:03 -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:32.830 15:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:32.830 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:33.086 15:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.086 15:42:03 -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:23:33.086 15:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.086 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:33.086 malloc0 00:23:33.086 15:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.086 15:42:03 -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.087 15:42:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:33.087 15:42:03 -- common/autotest_common.sh@10 -- # set +x 00:23:33.087 15:42:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:33.087 15:42:03 -- target/zcopy.sh@33 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:23:33.087 15:42:03 -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:23:33.087 15:42:03 -- nvmf/common.sh@521 -- # config=() 00:23:33.087 15:42:03 -- nvmf/common.sh@521 -- # local subsystem config 00:23:33.087 15:42:03 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:33.087 15:42:03 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:33.087 { 00:23:33.087 "params": { 00:23:33.087 "name": "Nvme$subsystem", 00:23:33.087 "trtype": "$TEST_TRANSPORT", 
00:23:33.087 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:33.087 "adrfam": "ipv4", 00:23:33.087 "trsvcid": "$NVMF_PORT", 00:23:33.087 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:33.087 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:33.087 "hdgst": ${hdgst:-false}, 00:23:33.087 "ddgst": ${ddgst:-false} 00:23:33.087 }, 00:23:33.087 "method": "bdev_nvme_attach_controller" 00:23:33.087 } 00:23:33.087 EOF 00:23:33.087 )") 00:23:33.087 15:42:03 -- nvmf/common.sh@543 -- # cat 00:23:33.087 15:42:03 -- nvmf/common.sh@545 -- # jq . 00:23:33.087 15:42:03 -- nvmf/common.sh@546 -- # IFS=, 00:23:33.087 15:42:03 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:33.087 "params": { 00:23:33.087 "name": "Nvme1", 00:23:33.087 "trtype": "tcp", 00:23:33.087 "traddr": "10.0.0.2", 00:23:33.087 "adrfam": "ipv4", 00:23:33.087 "trsvcid": "4420", 00:23:33.087 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:33.087 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:33.087 "hdgst": false, 00:23:33.087 "ddgst": false 00:23:33.087 }, 00:23:33.087 "method": "bdev_nvme_attach_controller" 00:23:33.087 }' 00:23:33.087 [2024-04-26 15:42:03.214769] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:23:33.087 [2024-04-26 15:42:03.214887] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74843 ] 00:23:33.087 [2024-04-26 15:42:03.357235] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.358 [2024-04-26 15:42:03.490366] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.615 Running I/O for 10 seconds... 00:23:43.588 00:23:43.588 Latency(us) 00:23:43.588 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.588 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:23:43.588 Verification LBA range: start 0x0 length 0x1000 00:23:43.588 Nvme1n1 : 10.02 6031.63 47.12 0.00 0.00 21153.84 2353.34 32887.16 00:23:43.588 =================================================================================================================== 00:23:43.588 Total : 6031.63 47.12 0.00 0.00 21153.84 2353.34 32887.16 00:23:43.846 15:42:14 -- target/zcopy.sh@39 -- # perfpid=74967 00:23:43.846 15:42:14 -- target/zcopy.sh@41 -- # xtrace_disable 00:23:43.846 15:42:14 -- common/autotest_common.sh@10 -- # set +x 00:23:43.846 15:42:14 -- target/zcopy.sh@37 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:23:43.846 15:42:14 -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:23:43.846 15:42:14 -- nvmf/common.sh@521 -- # config=() 00:23:43.846 15:42:14 -- nvmf/common.sh@521 -- # local subsystem config 00:23:43.846 15:42:14 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:23:43.846 15:42:14 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:23:43.846 { 00:23:43.846 "params": { 00:23:43.846 "name": "Nvme$subsystem", 00:23:43.846 "trtype": "$TEST_TRANSPORT", 00:23:43.846 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:43.846 "adrfam": "ipv4", 00:23:43.846 "trsvcid": "$NVMF_PORT", 00:23:43.846 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:43.846 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:43.846 "hdgst": ${hdgst:-false}, 00:23:43.846 "ddgst": ${ddgst:-false} 00:23:43.846 }, 00:23:43.846 "method": "bdev_nvme_attach_controller" 00:23:43.846 } 00:23:43.846 EOF 00:23:43.846 
)") 00:23:43.846 15:42:14 -- nvmf/common.sh@543 -- # cat 00:23:43.846 [2024-04-26 15:42:14.014170] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.014214] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 15:42:14 -- nvmf/common.sh@545 -- # jq . 00:23:43.846 15:42:14 -- nvmf/common.sh@546 -- # IFS=, 00:23:43.846 15:42:14 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:23:43.846 "params": { 00:23:43.846 "name": "Nvme1", 00:23:43.846 "trtype": "tcp", 00:23:43.846 "traddr": "10.0.0.2", 00:23:43.846 "adrfam": "ipv4", 00:23:43.846 "trsvcid": "4420", 00:23:43.846 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.846 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.846 "hdgst": false, 00:23:43.846 "ddgst": false 00:23:43.846 }, 00:23:43.846 "method": "bdev_nvme_attach_controller" 00:23:43.846 }' 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.026131] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.026178] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.034104] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.034133] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.046121] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.046163] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.054118] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.054159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.066119] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.066161] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 [2024-04-26 15:42:14.067619] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:23:43.846 [2024-04-26 15:42:14.068202] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74967 ] 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.074108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.074134] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.082110] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.082147] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.090113] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.090152] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.098118] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.098158] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.110154] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.110194] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.118125] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.118167] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.126128] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.126169] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:43.846 [2024-04-26 15:42:14.134152] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:43.846 [2024-04-26 15:42:14.134185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:43.846 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.142128] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.142176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.154170] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.154210] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.166170] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.166207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.178191] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.178234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.186162] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.186196] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.198182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.198227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 [2024-04-26 15:42:14.200969] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.210193] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.210234] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.222190] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.222231] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.234191] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.234229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.242174] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.242208] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:44.105 [2024-04-26 15:42:14.250171] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.250205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.258169] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.258199] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.270209] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.270250] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.105 [2024-04-26 15:42:14.278188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.105 [2024-04-26 15:42:14.278223] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.105 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.286183] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.286229] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.294187] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.294220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.302183] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.302212] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.310196] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.310232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.317278] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:44.106 [2024-04-26 15:42:14.318188] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.318215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.326202] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.326235] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.338237] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.338291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.350241] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.350283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.362246] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.362289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.374246] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.374291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.106 [2024-04-26 15:42:14.386254] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.106 [2024-04-26 15:42:14.386297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.106 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.365 [2024-04-26 15:42:14.398264] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.365 [2024-04-26 15:42:14.398311] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.365 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.365 [2024-04-26 15:42:14.406221] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.365 [2024-04-26 15:42:14.406259] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.365 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.365 [2024-04-26 15:42:14.414237] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.365 [2024-04-26 15:42:14.414272] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.365 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.365 [2024-04-26 15:42:14.422212] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.365 [2024-04-26 15:42:14.422242] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.365 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.365 [2024-04-26 15:42:14.430265] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.365 [2024-04-26 15:42:14.430302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.438236] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.438273] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.446242] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.446280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.454251] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.454288] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.462257] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.462293] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.470262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.470300] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.478277] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.478313] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:44.366 [2024-04-26 15:42:14.486265] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.486318] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.494264] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.494301] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 Running I/O for 5 seconds... 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.502262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.502295] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.515500] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.515550] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.528092] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.528148] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.545375] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.545432] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.561880] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.561929] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 
00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.572343] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.572388] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.583191] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.583233] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.595926] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.595964] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.606161] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.606205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.616831] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.616882] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.629775] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.629821] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 
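Each *ERROR* pair plus the matching "error on JSON-RPC call" line above records one nvmf_subsystem_add_ns request being rejected because NSID 1 is already attached to cnode1, and the rejections keep arriving while bdevperf runs its I/O job. A hypothetical standalone reproduction against the same target would look roughly like the sketch below (it assumes scripts/rpc.py from the same repo checkout; the rpc_cmd helper seen in the trace ultimately drives that script):

    # malloc0 was already attached to cnode1 as NSID 1 earlier in the run
    # (target/zcopy.sh@30), so the target rejects the duplicate request with
    # JSON-RPC error -32602 "Invalid parameters", matching the log entries above.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py \
        nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1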
00:23:44.366 [2024-04-26 15:42:14.639947] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.639994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.366 [2024-04-26 15:42:14.654495] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.366 [2024-04-26 15:42:14.654542] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.366 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.664465] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.664507] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.675409] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.675457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.690504] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.690557] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.708274] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.708326] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.723707] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.723762] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, 
method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.740310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.740362] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.750613] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.750659] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.761338] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.761381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.772460] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.772508] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.783611] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.783656] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.796094] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:44.626 [2024-04-26 15:42:14.796149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:44.626 [2024-04-26 15:42:14.806354] 
00:23:44.626 [2024-04-26 15:42:14.806354] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:23:44.626 [2024-04-26 15:42:14.806396] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:23:44.626 2024/04/26 15:42:14 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters
00:23:46.440 [2024-04-26 15:42:16.594688] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:23:46.440 [2024-04-26 15:42:16.594903] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add
namespace 00:23:46.440 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.440 [2024-04-26 15:42:16.611573] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.440 [2024-04-26 15:42:16.611823] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.440 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.440 [2024-04-26 15:42:16.629655] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.440 [2024-04-26 15:42:16.629878] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.440 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.440 [2024-04-26 15:42:16.645182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.440 [2024-04-26 15:42:16.645227] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.440 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.440 [2024-04-26 15:42:16.664127] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.440 [2024-04-26 15:42:16.664203] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.440 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.440 [2024-04-26 15:42:16.680028] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.441 [2024-04-26 15:42:16.680282] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.441 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.441 [2024-04-26 15:42:16.697556] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.441 [2024-04-26 15:42:16.697786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.441 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:46.441 [2024-04-26 15:42:16.712773] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.441 [2024-04-26 15:42:16.712994] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.441 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.441 [2024-04-26 15:42:16.722762] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.441 [2024-04-26 15:42:16.722957] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.441 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.733951] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.734153] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.751106] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.751370] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.761598] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.761786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.776243] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.776447] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.792268] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.792320] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.811031] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.811348] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.826420] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.826654] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.843361] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.843418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.860850] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.860914] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.874663] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.874719] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.892072] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.892159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.908001] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.908256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.924419] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.924652] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.941813] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.942126] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.699 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.699 [2024-04-26 15:42:16.957123] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.699 [2024-04-26 15:42:16.957394] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.700 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.700 [2024-04-26 15:42:16.967764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.700 [2024-04-26 15:42:16.967959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.700 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.700 [2024-04-26 15:42:16.983703] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.700 [2024-04-26 15:42:16.983952] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.700 2024/04/26 15:42:16 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:16.999562] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:16.999622] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.008863] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.008912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.020949] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.021159] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.034308] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.034501] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.049911] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.050118] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.059932] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.060105] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.074179] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.074228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.087748] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.087805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.104949] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.105161] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.958 [2024-04-26 15:42:17.119845] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.958 [2024-04-26 15:42:17.120061] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.958 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.135589] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.135892] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.149537] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.149610] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.166781] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.167044] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.184574] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.184824] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.201263] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.201482] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.216935] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.217185] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.235590] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.235786] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:46.959 [2024-04-26 15:42:17.246398] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:46.959 [2024-04-26 15:42:17.246496] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:46.959 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.256924] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.257131] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.268244] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.268424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.280739] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:47.218 [2024-04-26 15:42:17.280788] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.290487] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.290538] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.306481] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.306535] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.317290] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.317465] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.328642] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.328832] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.340184] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.340228] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.351640] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.351695] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.366570] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.218 [2024-04-26 15:42:17.366770] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.218 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.218 [2024-04-26 15:42:17.382906] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.383113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.399377] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.399611] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.415626] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.415890] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.432479] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.432733] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.448638] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.448886] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.466523] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.466785] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.481553] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.481606] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.219 [2024-04-26 15:42:17.498337] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.219 [2024-04-26 15:42:17.498564] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.219 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.514051] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.514321] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.524867] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.525123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.539932] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.540186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.555679] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.555938] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.573672] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.573912] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.583870] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.584072] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.595644] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.595702] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.477 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.477 [2024-04-26 15:42:17.606209] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.477 [2024-04-26 15:42:17.606256] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.617342] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.617391] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.628239] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.628415] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.643772] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.643990] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.659927] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.660113] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.675952] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.676189] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.692491] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.692712] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.705922] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.705972] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.723218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.723458] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.478 [2024-04-26 15:42:17.739281] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.739552] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:47.478 [2024-04-26 15:42:17.759593] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.478 [2024-04-26 15:42:17.759666] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.478 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.735 [2024-04-26 15:42:17.779807] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.735 [2024-04-26 15:42:17.779885] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.798728] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.799064] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.818673] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.819014] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.840262] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.840590] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.858491] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.858728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.873843] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.874088] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.883999] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.884236] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.898978] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.899035] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.909336] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.909533] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.923956] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.924216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.934624] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.934674] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.949917] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.950127] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.965110] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.965353] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.975570] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.975759] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:17.987248] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:17.987439] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:17 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:18.002854] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:18.003087] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:18.013410] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:18.013592] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.736 [2024-04-26 15:42:18.028021] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.736 [2024-04-26 15:42:18.028220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.736 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.038849] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.039030] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.053747] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.053798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.071245] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.071297] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.086594] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.086816] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.104497] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.104725] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.118967] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.119205] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.135815] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.136011] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.150892] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.151085] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.168050] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.168286] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.183391] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.183441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.199078] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.199130] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.215932] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.216123] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.231702] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.995 [2024-04-26 15:42:18.231902] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.995 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.995 [2024-04-26 15:42:18.249064] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.996 [2024-04-26 15:42:18.249302] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.996 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.996 [2024-04-26 15:42:18.266109] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.996 [2024-04-26 15:42:18.266360] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.996 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:47.996 [2024-04-26 15:42:18.282969] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:47.996 [2024-04-26 15:42:18.283191] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:47.996 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.299091] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.299371] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.316675] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.316877] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.331283] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.331333] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.347234] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.347283] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.363948] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:48.255 [2024-04-26 15:42:18.364216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.381532] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.381798] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.393260] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.393490] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.408302] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.408604] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.422214] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.422466] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.433466] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.433651] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.444543] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.444732] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.458624] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.458679] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.474642] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.474837] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.493407] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.493670] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.504690] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.504853] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.518191] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.518386] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.255 [2024-04-26 15:42:18.533764] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.255 [2024-04-26 15:42:18.534050] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.255 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.549979] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.550232] 
nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.567994] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.568065] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.585224] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.585290] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.597024] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.597216] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.611644] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.611848] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.627323] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.627522] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.642388] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.513 [2024-04-26 15:42:18.642589] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.513 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for 
nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.513 [2024-04-26 15:42:18.659806] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.660010] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.675409] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.675745] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.693213] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.693418] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.708291] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.708587] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.725370] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.725727] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.741226] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.741546] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.758871] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.759217] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.774573] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.774888] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.784794] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.784961] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.514 [2024-04-26 15:42:18.800076] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.514 [2024-04-26 15:42:18.800309] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.514 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.816233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.816441] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.833859] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.834068] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.849108] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.849331] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid 
parameters 00:23:48.772 [2024-04-26 15:42:18.869323] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.869619] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.888820] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.889150] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.905235] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.905456] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.924163] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.924232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.938130] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.938381] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.956059] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.956349] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.970491] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.970541] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on 
JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:18.987471] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:18.987529] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:18 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:19.003231] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:19.003457] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:19.019784] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:19.020018] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:19.037218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:19.037445] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:48.772 [2024-04-26 15:42:19.053465] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:48.772 [2024-04-26 15:42:19.053704] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:48.772 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.034 [2024-04-26 15:42:19.068953] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.034 [2024-04-26 15:42:19.069174] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.034 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.034 [2024-04-26 15:42:19.079250] 
subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.034 [2024-04-26 15:42:19.079459] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.034 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.034 [2024-04-26 15:42:19.093899] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.094120] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.111705] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.111760] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.125987] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.126039] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.141771] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.141980] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.158264] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.158461] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.175631] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.175847] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: 
map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.191047] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.191260] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.201310] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.201352] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.212762] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.212805] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.223094] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.223280] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.234265] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.234455] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.251240] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.251446] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.261670] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.261855] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.272984] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.273176] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.286057] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.286245] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.301845] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.301891] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.035 [2024-04-26 15:42:19.316961] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.035 [2024-04-26 15:42:19.317009] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.035 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.299 [2024-04-26 15:42:19.334126] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.299 [2024-04-26 15:42:19.334339] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.349004] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.349220] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 
no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.364836] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.365031] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.374593] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.374636] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.388043] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.388095] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.398127] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.398181] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.408766] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.408808] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.421218] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.421263] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.431129] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:23:49.300 [2024-04-26 15:42:19.431186] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.441576] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.441621] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.452379] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.452424] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.465096] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.465149] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.474686] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.474728] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.485660] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.485706] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.502048] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.502102] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] 
nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.507122] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.507162] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:23:49.300
00:23:49.300                                                   Latency(us)
00:23:49.300 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:23:49.300 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:23:49.300 Nvme1n1                     :       5.01   11176.54      87.32       0.00       0.00   11436.64    4438.57   30265.72
00:23:49.300 ===================================================================================================================
00:23:49.300 Total                       :              11176.54      87.32       0.00       0.00   11436.64    4438.57   30265.72
00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.515182] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.515232] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.523168] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.523207] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.531173] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.531215] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.543193] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.543239] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.555197] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.555243] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.567208] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.567255] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.579221] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.579278] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.300 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.300 [2024-04-26 15:42:19.591217] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.300 [2024-04-26 15:42:19.591267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.559 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.559 [2024-04-26 15:42:19.603232] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.559 [2024-04-26 15:42:19.603285] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.559 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.559 [2024-04-26 15:42:19.615217] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.559 [2024-04-26 15:42:19.615267] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.559 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.559 [2024-04-26 15:42:19.627252] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.559 [2024-04-26 15:42:19.627314] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: 
Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.639212] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.639265] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.651210] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.651252] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.663210] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.663249] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.675252] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.675304] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.687268] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.687316] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.699277] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.699325] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.711241] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.711289] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 
15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.723233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.723277] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.735244] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.735291] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.747249] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.747294] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.755233] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.755271] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 [2024-04-26 15:42:19.767238] subsystem.c:1900:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:23:49.560 [2024-04-26 15:42:19.767276] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:49.560 2024/04/26 15:42:19 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:malloc0 no_auto_visible:%!s(bool=false) nsid:1] nqn:nqn.2016-06.io.spdk:cnode1], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:49.560 /home/vagrant/spdk_repo/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (74967) - No such process 00:23:49.560 15:42:19 -- target/zcopy.sh@49 -- # wait 74967 00:23:49.560 15:42:19 -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:23:49.560 15:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.560 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:23:49.560 15:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.560 15:42:19 -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create 
-b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:23:49.560 15:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.560 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:23:49.560 delay0 00:23:49.560 15:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.560 15:42:19 -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:23:49.560 15:42:19 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:49.560 15:42:19 -- common/autotest_common.sh@10 -- # set +x 00:23:49.560 15:42:19 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:49.560 15:42:19 -- target/zcopy.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:23:49.818 [2024-04-26 15:42:19.961225] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:23:57.923 Initializing NVMe Controllers 00:23:57.923 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:57.923 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:57.923 Initialization complete. Launching workers. 00:23:57.923 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 240, failed: 23429 00:23:57.923 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 23560, failed to submit 109 00:23:57.923 success 23460, unsuccess 100, failed 0 00:23:57.923 15:42:26 -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:23:57.923 15:42:26 -- target/zcopy.sh@60 -- # nvmftestfini 00:23:57.923 15:42:26 -- nvmf/common.sh@477 -- # nvmfcleanup 00:23:57.923 15:42:26 -- nvmf/common.sh@117 -- # sync 00:23:57.923 15:42:27 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:57.923 15:42:27 -- nvmf/common.sh@120 -- # set +e 00:23:57.923 15:42:27 -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:57.923 15:42:27 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:57.923 rmmod nvme_tcp 00:23:57.923 rmmod nvme_fabrics 00:23:57.923 rmmod nvme_keyring 00:23:57.923 15:42:27 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:57.923 15:42:27 -- nvmf/common.sh@124 -- # set -e 00:23:57.923 15:42:27 -- nvmf/common.sh@125 -- # return 0 00:23:57.923 15:42:27 -- nvmf/common.sh@478 -- # '[' -n 74792 ']' 00:23:57.923 15:42:27 -- nvmf/common.sh@479 -- # killprocess 74792 00:23:57.923 15:42:27 -- common/autotest_common.sh@936 -- # '[' -z 74792 ']' 00:23:57.923 15:42:27 -- common/autotest_common.sh@940 -- # kill -0 74792 00:23:57.923 15:42:27 -- common/autotest_common.sh@941 -- # uname 00:23:57.923 15:42:27 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:23:57.923 15:42:27 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 74792 00:23:57.923 15:42:27 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:23:57.923 15:42:27 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:23:57.923 15:42:27 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 74792' 00:23:57.923 killing process with pid 74792 00:23:57.923 15:42:27 -- common/autotest_common.sh@955 -- # kill 74792 00:23:57.923 15:42:27 -- common/autotest_common.sh@960 -- # wait 74792 00:23:57.923 15:42:27 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:23:57.923 15:42:27 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:23:57.923 15:42:27 -- nvmf/common.sh@485 -- # 
nvmf_tcp_fini 00:23:57.923 15:42:27 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:57.923 15:42:27 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:57.923 15:42:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:57.923 15:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.923 15:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.923 15:42:27 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:23:57.923 ************************************ 00:23:57.923 END TEST nvmf_zcopy 00:23:57.923 ************************************ 00:23:57.923 00:23:57.924 real 0m25.954s 00:23:57.924 user 0m40.300s 00:23:57.924 sys 0m7.897s 00:23:57.924 15:42:27 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:23:57.924 15:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.924 15:42:27 -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:57.924 15:42:27 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:23:57.924 15:42:27 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:23:57.924 15:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.924 ************************************ 00:23:57.924 START TEST nvmf_nmic 00:23:57.924 ************************************ 00:23:57.924 15:42:27 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:23:57.924 * Looking for test storage... 00:23:57.924 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:23:57.924 15:42:27 -- target/nmic.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:23:57.924 15:42:27 -- nvmf/common.sh@7 -- # uname -s 00:23:57.924 15:42:27 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:57.924 15:42:27 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:57.924 15:42:27 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:57.924 15:42:27 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:57.924 15:42:27 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:57.924 15:42:27 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:57.924 15:42:27 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:57.924 15:42:27 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:57.924 15:42:27 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:57.924 15:42:27 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:57.924 15:42:27 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:23:57.924 15:42:27 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:23:57.924 15:42:27 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:57.924 15:42:27 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:57.924 15:42:27 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:23:57.924 15:42:27 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:57.924 15:42:27 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:23:57.924 15:42:27 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:57.924 15:42:27 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:57.924 15:42:27 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:57.924 15:42:27 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.924 15:42:27 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.924 15:42:27 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.924 15:42:27 -- paths/export.sh@5 -- # export PATH 00:23:57.924 15:42:27 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:57.924 15:42:27 -- nvmf/common.sh@47 -- # : 0 00:23:57.924 15:42:27 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:57.924 15:42:27 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:57.924 15:42:27 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:57.924 15:42:27 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:57.924 15:42:27 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:57.924 15:42:27 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:57.924 15:42:27 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:57.924 15:42:27 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:57.924 15:42:27 -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:57.924 15:42:27 -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:57.924 15:42:27 -- target/nmic.sh@14 -- # nvmftestinit 00:23:57.924 15:42:27 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:23:57.924 15:42:27 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.924 15:42:27 -- nvmf/common.sh@437 -- # prepare_net_devs 00:23:57.924 15:42:27 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:23:57.924 15:42:27 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:23:57.924 15:42:27 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd 
_remove_spdk_ns 00:23:57.924 15:42:27 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:57.924 15:42:27 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:57.924 15:42:27 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:23:57.924 15:42:27 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:23:57.924 15:42:27 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:23:57.924 15:42:27 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:23:57.924 15:42:27 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:23:57.924 15:42:27 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:23:57.924 15:42:27 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.924 15:42:27 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.924 15:42:27 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:23:57.924 15:42:27 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:23:57.924 15:42:27 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:23:57.924 15:42:27 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:23:57.924 15:42:27 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:23:57.924 15:42:27 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.924 15:42:27 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:23:57.924 15:42:27 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:23:57.924 15:42:27 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:23:57.924 15:42:27 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:23:57.924 15:42:27 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:23:57.924 15:42:27 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:23:57.924 Cannot find device "nvmf_tgt_br" 00:23:57.924 15:42:27 -- nvmf/common.sh@155 -- # true 00:23:57.924 15:42:27 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:23:57.924 Cannot find device "nvmf_tgt_br2" 00:23:57.924 15:42:27 -- nvmf/common.sh@156 -- # true 00:23:57.924 15:42:27 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:23:57.924 15:42:27 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:23:57.924 Cannot find device "nvmf_tgt_br" 00:23:57.924 15:42:27 -- nvmf/common.sh@158 -- # true 00:23:57.924 15:42:27 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:23:57.924 Cannot find device "nvmf_tgt_br2" 00:23:57.924 15:42:27 -- nvmf/common.sh@159 -- # true 00:23:57.924 15:42:27 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:23:57.924 15:42:27 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:23:57.924 15:42:27 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:23:57.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.924 15:42:27 -- nvmf/common.sh@162 -- # true 00:23:57.924 15:42:27 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:23:57.924 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:23:57.924 15:42:27 -- nvmf/common.sh@163 -- # true 00:23:57.924 15:42:27 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:23:57.924 15:42:27 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:23:57.924 15:42:27 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:23:57.924 15:42:27 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:23:57.924 
15:42:27 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:23:57.924 15:42:27 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:23:57.924 15:42:27 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:23:57.924 15:42:27 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:23:57.924 15:42:27 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:23:57.924 15:42:27 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:23:57.924 15:42:27 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:23:57.924 15:42:27 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:23:57.924 15:42:27 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:23:57.924 15:42:27 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:23:57.924 15:42:27 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:23:57.924 15:42:27 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:23:57.924 15:42:27 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:23:57.924 15:42:27 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:23:57.924 15:42:27 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:23:57.924 15:42:27 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:23:57.924 15:42:27 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:23:57.924 15:42:27 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:23:57.924 15:42:27 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:23:57.924 15:42:27 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:23:57.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:57.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:23:57.924 00:23:57.924 --- 10.0.0.2 ping statistics --- 00:23:57.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.924 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:23:57.924 15:42:27 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:23:57.924 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:23:57.924 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:23:57.924 00:23:57.925 --- 10.0.0.3 ping statistics --- 00:23:57.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.925 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:23:57.925 15:42:27 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:23:57.925 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.925 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:23:57.925 00:23:57.925 --- 10.0.0.1 ping statistics --- 00:23:57.925 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.925 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:23:57.925 15:42:27 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.925 15:42:27 -- nvmf/common.sh@422 -- # return 0 00:23:57.925 15:42:27 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:23:57.925 15:42:27 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.925 15:42:27 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:23:57.925 15:42:27 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:23:57.925 15:42:27 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.925 15:42:27 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:23:57.925 15:42:27 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:23:57.925 15:42:27 -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:23:57.925 15:42:27 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:23:57.925 15:42:27 -- common/autotest_common.sh@710 -- # xtrace_disable 00:23:57.925 15:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.925 15:42:27 -- nvmf/common.sh@470 -- # nvmfpid=75303 00:23:57.925 15:42:27 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:57.925 15:42:27 -- nvmf/common.sh@471 -- # waitforlisten 75303 00:23:57.925 15:42:27 -- common/autotest_common.sh@817 -- # '[' -z 75303 ']' 00:23:57.925 15:42:27 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.925 15:42:27 -- common/autotest_common.sh@822 -- # local max_retries=100 00:23:57.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.925 15:42:27 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.925 15:42:27 -- common/autotest_common.sh@826 -- # xtrace_disable 00:23:57.925 15:42:27 -- common/autotest_common.sh@10 -- # set +x 00:23:57.925 [2024-04-26 15:42:28.055702] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:23:57.925 [2024-04-26 15:42:28.055838] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.925 [2024-04-26 15:42:28.198180] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:58.182 [2024-04-26 15:42:28.334559] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:58.182 [2024-04-26 15:42:28.334630] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:58.182 [2024-04-26 15:42:28.334643] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:58.182 [2024-04-26 15:42:28.334652] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:58.182 [2024-04-26 15:42:28.334659] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
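The entries above show the harness bringing up the SPDK NVMe-oF target for the nmic test: it checks reachability of the veth addresses, loads nvme-tcp, then launches nvmf_tgt inside the test namespace and waits for its JSON-RPC socket. A minimal by-hand sketch of that startup follows, using the same binary and flags that appear in the log; the readiness loop is an assumption standing in for the harness's waitforlisten helper, with spdk_get_version used only as a cheap RPC to poll.

# start the target inside the test namespace (flags taken from the log above)
ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
# assumption: poll the default RPC socket (/var/tmp/spdk.sock) until the app answers
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version >/dev/null 2>&1; do
  sleep 0.2
done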
00:23:58.182 [2024-04-26 15:42:28.334976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.182 [2024-04-26 15:42:28.335055] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:23:58.182 [2024-04-26 15:42:28.335239] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:23:58.182 [2024-04-26 15:42:28.335513] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:23:59.183 15:42:29 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:23:59.183 15:42:29 -- common/autotest_common.sh@850 -- # return 0 00:23:59.183 15:42:29 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:23:59.183 15:42:29 -- common/autotest_common.sh@716 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 15:42:29 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:59.183 15:42:29 -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 [2024-04-26 15:42:29.196063] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 Malloc0 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 [2024-04-26 15:42:29.260087] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:23:59.183 test case1: single bdev can't be used in multiple subsystems 00:23:59.183 15:42:29 -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@28 -- # nmic_status=0 00:23:59.183 15:42:29 -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 [2024-04-26 15:42:29.283879] bdev.c:7995:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:23:59.183 [2024-04-26 15:42:29.283941] subsystem.c:1934:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:23:59.183 [2024-04-26 15:42:29.283959] nvmf_rpc.c:1534:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:23:59.183 2024/04/26 15:42:29 error on JSON-RPC call, method: nvmf_subsystem_add_ns, params: map[namespace:map[bdev_name:Malloc0 no_auto_visible:%!s(bool=false)] nqn:nqn.2016-06.io.spdk:cnode2], err: error received for nvmf_subsystem_add_ns method, err: Code=-32602 Msg=Invalid parameters 00:23:59.183 request: 00:23:59.183 { 00:23:59.183 "method": "nvmf_subsystem_add_ns", 00:23:59.183 "params": { 00:23:59.183 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.183 "namespace": { 00:23:59.183 "bdev_name": "Malloc0", 00:23:59.183 "no_auto_visible": false 00:23:59.183 } 00:23:59.183 } 00:23:59.183 } 00:23:59.183 Got JSON-RPC error response 00:23:59.183 GoRPCClient: error on JSON-RPC call 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@29 -- # nmic_status=1 00:23:59.183 15:42:29 -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:23:59.183 Adding namespace failed - expected result. 00:23:59.183 15:42:29 -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 
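Test case 1 above is meant to fail: Malloc0 is already claimed by cnode1, so adding it to cnode2 produces the "Invalid parameters" JSON-RPC error shown in the log. A minimal sketch of the same sequence driven directly with rpc.py against a running target, reusing the NQNs, bdev name and transport options from the log; the trailing echo is only an assumption added here to make the expected rejection visible.

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
# expected to fail: Malloc0 is already claimed by cnode1, so this RPC returns Invalid parameters
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'namespace add rejected, as expected'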
00:23:59.183 test case2: host connect to nvmf target in multiple paths 00:23:59.183 15:42:29 -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:23:59.183 15:42:29 -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:59.183 15:42:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:23:59.183 15:42:29 -- common/autotest_common.sh@10 -- # set +x 00:23:59.183 [2024-04-26 15:42:29.296113] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:59.183 15:42:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:23:59.183 15:42:29 -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:23:59.183 15:42:29 -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:23:59.442 15:42:29 -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:23:59.442 15:42:29 -- common/autotest_common.sh@1184 -- # local i=0 00:23:59.442 15:42:29 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:23:59.442 15:42:29 -- common/autotest_common.sh@1186 -- # [[ -n '' ]] 00:23:59.442 15:42:29 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:01.340 15:42:31 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:01.598 15:42:31 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:01.598 15:42:31 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:24:01.598 15:42:31 -- common/autotest_common.sh@1193 -- # nvme_devices=1 00:24:01.598 15:42:31 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:01.598 15:42:31 -- common/autotest_common.sh@1194 -- # return 0 00:24:01.598 15:42:31 -- target/nmic.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:01.598 [global] 00:24:01.598 thread=1 00:24:01.598 invalidate=1 00:24:01.598 rw=write 00:24:01.598 time_based=1 00:24:01.598 runtime=1 00:24:01.598 ioengine=libaio 00:24:01.598 direct=1 00:24:01.598 bs=4096 00:24:01.598 iodepth=1 00:24:01.598 norandommap=0 00:24:01.598 numjobs=1 00:24:01.598 00:24:01.598 verify_dump=1 00:24:01.598 verify_backlog=512 00:24:01.598 verify_state_save=0 00:24:01.598 do_verify=1 00:24:01.598 verify=crc32c-intel 00:24:01.598 [job0] 00:24:01.598 filename=/dev/nvme0n1 00:24:01.598 Could not set queue depth (nvme0n1) 00:24:01.598 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:01.598 fio-3.35 00:24:01.598 Starting 1 thread 00:24:02.971 00:24:02.971 job0: (groupid=0, jobs=1): err= 0: pid=75407: Fri Apr 26 15:42:32 2024 00:24:02.971 read: IOPS=1573, BW=6294KiB/s (6445kB/s)(6300KiB/1001msec) 00:24:02.971 slat (nsec): min=14123, max=78102, avg=25904.78, stdev=9405.85 00:24:02.971 clat (usec): min=136, max=1391, avg=288.70, stdev=101.41 00:24:02.971 lat (usec): min=151, max=1434, avg=314.61, stdev=108.31 00:24:02.971 clat percentiles (usec): 00:24:02.971 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 155], 00:24:02.971 | 30.00th=[ 167], 40.00th=[ 322], 50.00th=[ 338], 60.00th=[ 347], 00:24:02.971 | 70.00th=[ 355], 80.00th=[ 367], 90.00th=[ 
383], 95.00th=[ 392], 00:24:02.971 | 99.00th=[ 424], 99.50th=[ 441], 99.90th=[ 603], 99.95th=[ 1385], 00:24:02.971 | 99.99th=[ 1385] 00:24:02.971 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:02.971 slat (usec): min=20, max=242, avg=34.96, stdev=11.92 00:24:02.971 clat (usec): min=93, max=693, avg=206.36, stdev=81.87 00:24:02.971 lat (usec): min=116, max=735, avg=241.32, stdev=90.14 00:24:02.971 clat percentiles (usec): 00:24:02.971 | 1.00th=[ 97], 5.00th=[ 100], 10.00th=[ 103], 20.00th=[ 108], 00:24:02.971 | 30.00th=[ 118], 40.00th=[ 155], 50.00th=[ 251], 60.00th=[ 265], 00:24:02.971 | 70.00th=[ 273], 80.00th=[ 281], 90.00th=[ 293], 95.00th=[ 302], 00:24:02.971 | 99.00th=[ 326], 99.50th=[ 334], 99.90th=[ 363], 99.95th=[ 586], 00:24:02.971 | 99.99th=[ 693] 00:24:02.971 bw ( KiB/s): min= 9240, max= 9240, per=100.00%, avg=9240.00, stdev= 0.00, samples=1 00:24:02.971 iops : min= 2310, max= 2310, avg=2310.00, stdev= 0.00, samples=1 00:24:02.971 lat (usec) : 100=2.57%, 250=40.05%, 500=57.27%, 750=0.08% 00:24:02.971 lat (msec) : 2=0.03% 00:24:02.971 cpu : usr=1.80%, sys=8.80%, ctx=3623, majf=0, minf=2 00:24:02.971 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:02.971 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.971 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:02.971 issued rwts: total=1575,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:02.971 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:02.971 00:24:02.971 Run status group 0 (all jobs): 00:24:02.971 READ: bw=6294KiB/s (6445kB/s), 6294KiB/s-6294KiB/s (6445kB/s-6445kB/s), io=6300KiB (6451kB), run=1001-1001msec 00:24:02.971 WRITE: bw=8184KiB/s (8380kB/s), 8184KiB/s-8184KiB/s (8380kB/s-8380kB/s), io=8192KiB (8389kB), run=1001-1001msec 00:24:02.971 00:24:02.971 Disk stats (read/write): 00:24:02.971 nvme0n1: ios=1586/1777, merge=0/0, ticks=478/372, in_queue=850, util=91.18% 00:24:02.971 15:42:32 -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:02.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:24:02.971 15:42:32 -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:02.971 15:42:32 -- common/autotest_common.sh@1205 -- # local i=0 00:24:02.971 15:42:32 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:02.971 15:42:32 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.971 15:42:33 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:02.971 15:42:33 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:02.971 15:42:33 -- common/autotest_common.sh@1217 -- # return 0 00:24:02.971 15:42:33 -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:24:02.971 15:42:33 -- target/nmic.sh@53 -- # nvmftestfini 00:24:02.971 15:42:33 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:02.971 15:42:33 -- nvmf/common.sh@117 -- # sync 00:24:02.971 15:42:33 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.971 15:42:33 -- nvmf/common.sh@120 -- # set +e 00:24:02.971 15:42:33 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.971 15:42:33 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.971 rmmod nvme_tcp 00:24:02.971 rmmod nvme_fabrics 00:24:02.971 rmmod nvme_keyring 00:24:02.971 15:42:33 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.971 15:42:33 -- nvmf/common.sh@124 -- # set -e 00:24:02.971 15:42:33 -- nvmf/common.sh@125 -- # return 0 
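Test case 2 above connects the host to the same subsystem through two listeners (ports 4420 and 4421), runs the write workload through the fio wrapper, and then tears down by NQN, which is why the disconnect reports two controllers. A minimal sketch of that flow with nvme-cli, reusing the host NQN/ID and wrapper invocation from the log; the host array is only shorthand introduced here.

host=(--hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9
      --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9)
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
nvme connect "${host[@]}" -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
# same 4 KiB, queue-depth 1 write/verify job the harness runs
/home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v
# disconnecting by NQN drops both controllers at once
nvme disconnect -n nqn.2016-06.io.spdk:cnode1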
00:24:02.971 15:42:33 -- nvmf/common.sh@478 -- # '[' -n 75303 ']' 00:24:02.972 15:42:33 -- nvmf/common.sh@479 -- # killprocess 75303 00:24:02.972 15:42:33 -- common/autotest_common.sh@936 -- # '[' -z 75303 ']' 00:24:02.972 15:42:33 -- common/autotest_common.sh@940 -- # kill -0 75303 00:24:02.972 15:42:33 -- common/autotest_common.sh@941 -- # uname 00:24:02.972 15:42:33 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:02.972 15:42:33 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75303 00:24:02.972 killing process with pid 75303 00:24:02.972 15:42:33 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:02.972 15:42:33 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:02.972 15:42:33 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75303' 00:24:02.972 15:42:33 -- common/autotest_common.sh@955 -- # kill 75303 00:24:02.972 15:42:33 -- common/autotest_common.sh@960 -- # wait 75303 00:24:03.230 15:42:33 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:03.230 15:42:33 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:03.230 15:42:33 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:03.230 15:42:33 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:03.230 15:42:33 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:03.230 15:42:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.230 15:42:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.230 15:42:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.230 15:42:33 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:03.230 00:24:03.230 real 0m5.964s 00:24:03.230 user 0m19.984s 00:24:03.230 sys 0m1.363s 00:24:03.230 15:42:33 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:03.230 15:42:33 -- common/autotest_common.sh@10 -- # set +x 00:24:03.230 ************************************ 00:24:03.230 END TEST nvmf_nmic 00:24:03.230 ************************************ 00:24:03.491 15:42:33 -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:03.491 15:42:33 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:03.491 15:42:33 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:03.491 15:42:33 -- common/autotest_common.sh@10 -- # set +x 00:24:03.491 ************************************ 00:24:03.491 START TEST nvmf_fio_target 00:24:03.491 ************************************ 00:24:03.491 15:42:33 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/fio.sh --transport=tcp 00:24:03.491 * Looking for test storage... 
00:24:03.491 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:03.491 15:42:33 -- target/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:03.491 15:42:33 -- nvmf/common.sh@7 -- # uname -s 00:24:03.491 15:42:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.491 15:42:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.491 15:42:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.491 15:42:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.491 15:42:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.491 15:42:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.491 15:42:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.491 15:42:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.491 15:42:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.491 15:42:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.491 15:42:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:03.491 15:42:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:03.491 15:42:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.491 15:42:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.491 15:42:33 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:03.491 15:42:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.491 15:42:33 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:03.491 15:42:33 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.491 15:42:33 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.491 15:42:33 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.491 15:42:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.491 15:42:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.491 15:42:33 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.491 15:42:33 -- paths/export.sh@5 -- # export PATH 00:24:03.492 15:42:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.492 15:42:33 -- nvmf/common.sh@47 -- # : 0 00:24:03.492 15:42:33 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:03.492 15:42:33 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:03.492 15:42:33 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.492 15:42:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.492 15:42:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.492 15:42:33 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:03.492 15:42:33 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:03.492 15:42:33 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:03.492 15:42:33 -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:03.492 15:42:33 -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:03.492 15:42:33 -- target/fio.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:03.492 15:42:33 -- target/fio.sh@16 -- # nvmftestinit 00:24:03.492 15:42:33 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:03.492 15:42:33 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.492 15:42:33 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:03.492 15:42:33 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:03.492 15:42:33 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:03.492 15:42:33 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.492 15:42:33 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:03.492 15:42:33 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.492 15:42:33 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:03.492 15:42:33 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:03.492 15:42:33 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:03.492 15:42:33 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:03.492 15:42:33 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:03.492 15:42:33 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:03.492 15:42:33 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:03.492 15:42:33 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:03.492 15:42:33 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:03.492 15:42:33 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:03.492 15:42:33 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:03.492 15:42:33 -- nvmf/common.sh@146 -- # 
NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:03.492 15:42:33 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:03.492 15:42:33 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:03.492 15:42:33 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:03.492 15:42:33 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:03.492 15:42:33 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:03.492 15:42:33 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:03.492 15:42:33 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:03.492 15:42:33 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:03.492 Cannot find device "nvmf_tgt_br" 00:24:03.492 15:42:33 -- nvmf/common.sh@155 -- # true 00:24:03.492 15:42:33 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:03.492 Cannot find device "nvmf_tgt_br2" 00:24:03.492 15:42:33 -- nvmf/common.sh@156 -- # true 00:24:03.492 15:42:33 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:03.492 15:42:33 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:03.492 Cannot find device "nvmf_tgt_br" 00:24:03.492 15:42:33 -- nvmf/common.sh@158 -- # true 00:24:03.492 15:42:33 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:03.492 Cannot find device "nvmf_tgt_br2" 00:24:03.492 15:42:33 -- nvmf/common.sh@159 -- # true 00:24:03.492 15:42:33 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:03.749 15:42:33 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:03.749 15:42:33 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:03.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.749 15:42:33 -- nvmf/common.sh@162 -- # true 00:24:03.749 15:42:33 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:03.749 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:03.749 15:42:33 -- nvmf/common.sh@163 -- # true 00:24:03.749 15:42:33 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:03.749 15:42:33 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:03.749 15:42:33 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:03.749 15:42:33 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:03.749 15:42:33 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:03.749 15:42:33 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:03.749 15:42:33 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:03.749 15:42:33 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:03.749 15:42:33 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:03.749 15:42:33 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:03.749 15:42:33 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:03.749 15:42:33 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:03.749 15:42:33 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:03.749 15:42:33 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:03.749 15:42:33 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 
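The nvmf_veth_init entries in this stretch first tear down whatever is left from a previous run (the "Cannot find device" and "Cannot open network namespace" messages are the expected output of that idempotent cleanup) and then recreate the namespace, veth pairs and addresses; the bridge wiring and the ping reachability checks continue below. A condensed sketch of the resulting topology, assuming a clean host and showing only one of the two target interfaces:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2        # initiator -> target reachability, as checked below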
00:24:03.749 15:42:33 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:03.749 15:42:33 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:03.749 15:42:33 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:03.749 15:42:33 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:03.749 15:42:33 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:03.749 15:42:33 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:03.749 15:42:33 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:03.749 15:42:33 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:03.749 15:42:33 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:03.749 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:03.749 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.100 ms 00:24:03.749 00:24:03.749 --- 10.0.0.2 ping statistics --- 00:24:03.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.749 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:24:03.749 15:42:33 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:03.749 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:03.749 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:24:03.749 00:24:03.749 --- 10.0.0.3 ping statistics --- 00:24:03.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.749 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:24:03.749 15:42:33 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:03.749 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:03.749 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:24:03.749 00:24:03.749 --- 10.0.0.1 ping statistics --- 00:24:03.749 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:03.749 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:24:03.749 15:42:34 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:03.749 15:42:34 -- nvmf/common.sh@422 -- # return 0 00:24:03.749 15:42:34 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:03.749 15:42:34 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:03.749 15:42:34 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:03.749 15:42:34 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:03.749 15:42:34 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:03.749 15:42:34 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:03.749 15:42:34 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:03.749 15:42:34 -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:24:03.749 15:42:34 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:03.749 15:42:34 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:03.749 15:42:34 -- common/autotest_common.sh@10 -- # set +x 00:24:03.749 15:42:34 -- nvmf/common.sh@470 -- # nvmfpid=75595 00:24:03.749 15:42:34 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:03.749 15:42:34 -- nvmf/common.sh@471 -- # waitforlisten 75595 00:24:03.749 15:42:34 -- common/autotest_common.sh@817 -- # '[' -z 75595 ']' 00:24:03.749 15:42:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:03.749 15:42:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:03.749 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:03.749 15:42:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:03.749 15:42:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:03.749 15:42:34 -- common/autotest_common.sh@10 -- # set +x 00:24:04.007 [2024-04-26 15:42:34.094305] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:24:04.007 [2024-04-26 15:42:34.094440] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:04.007 [2024-04-26 15:42:34.236003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:04.264 [2024-04-26 15:42:34.370595] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:04.264 [2024-04-26 15:42:34.370904] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:04.264 [2024-04-26 15:42:34.371072] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:04.264 [2024-04-26 15:42:34.371290] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:04.264 [2024-04-26 15:42:34.371421] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:04.264 [2024-04-26 15:42:34.371806] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:04.264 [2024-04-26 15:42:34.371901] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:04.264 [2024-04-26 15:42:34.371959] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:04.264 [2024-04-26 15:42:34.371965] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:05.196 15:42:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:05.196 15:42:35 -- common/autotest_common.sh@850 -- # return 0 00:24:05.197 15:42:35 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:05.197 15:42:35 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:05.197 15:42:35 -- common/autotest_common.sh@10 -- # set +x 00:24:05.197 15:42:35 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:05.197 15:42:35 -- target/fio.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:05.197 [2024-04-26 15:42:35.415831] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:05.197 15:42:35 -- target/fio.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:05.761 15:42:35 -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:24:05.761 15:42:35 -- target/fio.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:06.019 15:42:36 -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:24:06.019 15:42:36 -- target/fio.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:06.277 15:42:36 -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:24:06.277 15:42:36 -- target/fio.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:06.536 15:42:36 -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:24:06.536 15:42:36 -- target/fio.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:24:06.794 15:42:36 -- 
target/fio.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.052 15:42:37 -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:24:07.052 15:42:37 -- target/fio.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.373 15:42:37 -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:24:07.373 15:42:37 -- target/fio.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:24:07.632 15:42:37 -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:24:07.632 15:42:37 -- target/fio.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:24:07.890 15:42:38 -- target/fio.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:24:08.149 15:42:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:08.149 15:42:38 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.406 15:42:38 -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:24:08.406 15:42:38 -- target/fio.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:08.664 15:42:38 -- target/fio.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.921 [2024-04-26 15:42:39.016945] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.921 15:42:39 -- target/fio.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:24:09.178 15:42:39 -- target/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:24:09.434 15:42:39 -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:09.692 15:42:39 -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:24:09.692 15:42:39 -- common/autotest_common.sh@1184 -- # local i=0 00:24:09.692 15:42:39 -- common/autotest_common.sh@1185 -- # local nvme_device_counter=1 nvme_devices=0 00:24:09.692 15:42:39 -- common/autotest_common.sh@1186 -- # [[ -n 4 ]] 00:24:09.692 15:42:39 -- common/autotest_common.sh@1187 -- # nvme_device_counter=4 00:24:09.692 15:42:39 -- common/autotest_common.sh@1191 -- # sleep 2 00:24:11.587 15:42:41 -- common/autotest_common.sh@1192 -- # (( i++ <= 15 )) 00:24:11.587 15:42:41 -- common/autotest_common.sh@1193 -- # lsblk -l -o NAME,SERIAL 00:24:11.587 15:42:41 -- common/autotest_common.sh@1193 -- # grep -c SPDKISFASTANDAWESOME 00:24:11.587 15:42:41 -- common/autotest_common.sh@1193 -- # nvme_devices=4 00:24:11.587 15:42:41 -- common/autotest_common.sh@1194 -- # (( nvme_devices == nvme_device_counter )) 00:24:11.587 15:42:41 -- common/autotest_common.sh@1194 -- # return 0 00:24:11.587 15:42:41 -- target/fio.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:24:11.587 [global] 00:24:11.587 thread=1 00:24:11.587 invalidate=1 00:24:11.587 rw=write 00:24:11.587 time_based=1 00:24:11.587 runtime=1 00:24:11.587 ioengine=libaio 00:24:11.587 direct=1 00:24:11.587 bs=4096 00:24:11.587 iodepth=1 
00:24:11.587 norandommap=0 00:24:11.587 numjobs=1 00:24:11.587 00:24:11.587 verify_dump=1 00:24:11.587 verify_backlog=512 00:24:11.587 verify_state_save=0 00:24:11.587 do_verify=1 00:24:11.587 verify=crc32c-intel 00:24:11.587 [job0] 00:24:11.587 filename=/dev/nvme0n1 00:24:11.587 [job1] 00:24:11.587 filename=/dev/nvme0n2 00:24:11.587 [job2] 00:24:11.587 filename=/dev/nvme0n3 00:24:11.587 [job3] 00:24:11.588 filename=/dev/nvme0n4 00:24:11.588 Could not set queue depth (nvme0n1) 00:24:11.588 Could not set queue depth (nvme0n2) 00:24:11.588 Could not set queue depth (nvme0n3) 00:24:11.588 Could not set queue depth (nvme0n4) 00:24:11.845 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:11.845 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:11.845 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:11.845 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:11.845 fio-3.35 00:24:11.845 Starting 4 threads 00:24:13.218 00:24:13.218 job0: (groupid=0, jobs=1): err= 0: pid=75887: Fri Apr 26 15:42:43 2024 00:24:13.218 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:24:13.218 slat (nsec): min=11539, max=82720, avg=19223.31, stdev=7190.24 00:24:13.218 clat (usec): min=216, max=631, avg=293.00, stdev=42.59 00:24:13.218 lat (usec): min=253, max=648, avg=312.22, stdev=44.14 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 243], 5.00th=[ 253], 10.00th=[ 260], 20.00th=[ 269], 00:24:13.218 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 285], 60.00th=[ 293], 00:24:13.218 | 70.00th=[ 302], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 351], 00:24:13.218 | 99.00th=[ 465], 99.50th=[ 586], 99.90th=[ 611], 99.95th=[ 635], 00:24:13.218 | 99.99th=[ 635] 00:24:13.218 write: IOPS=1733, BW=6933KiB/s (7099kB/s)(6940KiB/1001msec); 0 zone resets 00:24:13.218 slat (usec): min=11, max=103, avg=31.12, stdev= 9.86 00:24:13.218 clat (usec): min=92, max=1976, avg=264.47, stdev=104.20 00:24:13.218 lat (usec): min=137, max=2025, avg=295.60, stdev=106.64 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 135], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 206], 00:24:13.218 | 30.00th=[ 215], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 245], 00:24:13.218 | 70.00th=[ 269], 80.00th=[ 343], 90.00th=[ 371], 95.00th=[ 392], 00:24:13.218 | 99.00th=[ 494], 99.50th=[ 906], 99.90th=[ 1713], 99.95th=[ 1975], 00:24:13.218 | 99.99th=[ 1975] 00:24:13.218 bw ( KiB/s): min= 7944, max= 7944, per=24.70%, avg=7944.00, stdev= 0.00, samples=1 00:24:13.218 iops : min= 1986, max= 1986, avg=1986.00, stdev= 0.00, samples=1 00:24:13.218 lat (usec) : 100=0.03%, 250=35.22%, 500=63.86%, 750=0.46%, 1000=0.31% 00:24:13.218 lat (msec) : 2=0.12% 00:24:13.218 cpu : usr=2.00%, sys=6.30%, ctx=3292, majf=0, minf=8 00:24:13.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 issued rwts: total=1536,1735,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:13.218 job1: (groupid=0, jobs=1): err= 0: pid=75888: Fri Apr 26 15:42:43 2024 00:24:13.218 read: IOPS=2343, BW=9375KiB/s (9600kB/s)(9384KiB/1001msec) 00:24:13.218 slat (nsec): min=13081, 
max=60031, avg=20326.92, stdev=6573.21 00:24:13.218 clat (usec): min=135, max=481, avg=200.30, stdev=32.07 00:24:13.218 lat (usec): min=150, max=496, avg=220.63, stdev=35.26 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 155], 20.00th=[ 163], 00:24:13.218 | 30.00th=[ 178], 40.00th=[ 202], 50.00th=[ 210], 60.00th=[ 217], 00:24:13.218 | 70.00th=[ 221], 80.00th=[ 227], 90.00th=[ 235], 95.00th=[ 243], 00:24:13.218 | 99.00th=[ 258], 99.50th=[ 265], 99.90th=[ 322], 99.95th=[ 408], 00:24:13.218 | 99.99th=[ 482] 00:24:13.218 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:24:13.218 slat (nsec): min=20064, max=98350, avg=30839.87, stdev=11957.23 00:24:13.218 clat (usec): min=100, max=1798, avg=153.43, stdev=41.29 00:24:13.218 lat (usec): min=122, max=1835, avg=184.27, stdev=46.50 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 106], 5.00th=[ 113], 10.00th=[ 116], 20.00th=[ 125], 00:24:13.218 | 30.00th=[ 141], 40.00th=[ 151], 50.00th=[ 157], 60.00th=[ 163], 00:24:13.218 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 184], 95.00th=[ 192], 00:24:13.218 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 249], 99.95th=[ 351], 00:24:13.218 | 99.99th=[ 1795] 00:24:13.218 bw ( KiB/s): min= 9304, max= 9304, per=28.93%, avg=9304.00, stdev= 0.00, samples=1 00:24:13.218 iops : min= 2326, max= 2326, avg=2326.00, stdev= 0.00, samples=1 00:24:13.218 lat (usec) : 250=99.04%, 500=0.94% 00:24:13.218 lat (msec) : 2=0.02% 00:24:13.218 cpu : usr=2.30%, sys=9.40%, ctx=4906, majf=0, minf=3 00:24:13.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 issued rwts: total=2346,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:13.218 job2: (groupid=0, jobs=1): err= 0: pid=75889: Fri Apr 26 15:42:43 2024 00:24:13.218 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:24:13.218 slat (usec): min=13, max=105, avg=20.86, stdev= 5.45 00:24:13.218 clat (usec): min=162, max=524, avg=249.13, stdev=30.50 00:24:13.218 lat (usec): min=178, max=539, avg=269.99, stdev=31.71 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 172], 5.00th=[ 182], 10.00th=[ 198], 20.00th=[ 235], 00:24:13.218 | 30.00th=[ 241], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 260], 00:24:13.218 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 281], 95.00th=[ 289], 00:24:13.218 | 99.00th=[ 310], 99.50th=[ 318], 99.90th=[ 338], 99.95th=[ 347], 00:24:13.218 | 99.99th=[ 529] 00:24:13.218 write: IOPS=2075, BW=8304KiB/s (8503kB/s)(8312KiB/1001msec); 0 zone resets 00:24:13.218 slat (usec): min=19, max=103, avg=29.55, stdev= 6.33 00:24:13.218 clat (usec): min=109, max=511, avg=181.35, stdev=23.76 00:24:13.218 lat (usec): min=137, max=533, avg=210.90, stdev=26.03 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 121], 5.00th=[ 135], 10.00th=[ 151], 20.00th=[ 167], 00:24:13.218 | 30.00th=[ 174], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 188], 00:24:13.218 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 208], 95.00th=[ 215], 00:24:13.218 | 99.00th=[ 231], 99.50th=[ 235], 99.90th=[ 289], 99.95th=[ 334], 00:24:13.218 | 99.99th=[ 510] 00:24:13.218 bw ( KiB/s): min= 8536, max= 8536, per=26.54%, avg=8536.00, stdev= 0.00, samples=1 00:24:13.218 iops : min= 2134, max= 2134, avg=2134.00, stdev= 0.00, samples=1 
00:24:13.218 lat (usec) : 250=72.30%, 500=27.65%, 750=0.05% 00:24:13.218 cpu : usr=1.80%, sys=7.80%, ctx=4126, majf=0, minf=9 00:24:13.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 issued rwts: total=2048,2078,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:13.218 job3: (groupid=0, jobs=1): err= 0: pid=75890: Fri Apr 26 15:42:43 2024 00:24:13.218 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:24:13.218 slat (nsec): min=11433, max=54303, avg=19072.33, stdev=6181.80 00:24:13.218 clat (usec): min=227, max=404, avg=287.43, stdev=24.07 00:24:13.218 lat (usec): min=249, max=423, avg=306.51, stdev=25.56 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 245], 5.00th=[ 255], 10.00th=[ 262], 20.00th=[ 269], 00:24:13.218 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 285], 60.00th=[ 289], 00:24:13.218 | 70.00th=[ 297], 80.00th=[ 306], 90.00th=[ 322], 95.00th=[ 334], 00:24:13.218 | 99.00th=[ 355], 99.50th=[ 363], 99.90th=[ 388], 99.95th=[ 404], 00:24:13.218 | 99.99th=[ 404] 00:24:13.218 write: IOPS=1673, BW=6693KiB/s (6854kB/s)(6700KiB/1001msec); 0 zone resets 00:24:13.218 slat (nsec): min=12468, max=92737, avg=32580.99, stdev=11189.36 00:24:13.218 clat (usec): min=131, max=7810, avg=278.90, stdev=291.04 00:24:13.218 lat (usec): min=152, max=7858, avg=311.48, stdev=293.39 00:24:13.218 clat percentiles (usec): 00:24:13.218 | 1.00th=[ 155], 5.00th=[ 186], 10.00th=[ 196], 20.00th=[ 208], 00:24:13.218 | 30.00th=[ 217], 40.00th=[ 225], 50.00th=[ 233], 60.00th=[ 245], 00:24:13.218 | 70.00th=[ 289], 80.00th=[ 351], 90.00th=[ 371], 95.00th=[ 383], 00:24:13.218 | 99.00th=[ 519], 99.50th=[ 979], 99.90th=[ 7177], 99.95th=[ 7832], 00:24:13.218 | 99.99th=[ 7832] 00:24:13.218 bw ( KiB/s): min= 7464, max= 7464, per=23.21%, avg=7464.00, stdev= 0.00, samples=1 00:24:13.218 iops : min= 1866, max= 1866, avg=1866.00, stdev= 0.00, samples=1 00:24:13.218 lat (usec) : 250=33.82%, 500=65.59%, 750=0.19%, 1000=0.22% 00:24:13.218 lat (msec) : 2=0.06%, 4=0.03%, 10=0.09% 00:24:13.218 cpu : usr=1.90%, sys=6.10%, ctx=3218, majf=0, minf=15 00:24:13.218 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:13.218 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:13.218 issued rwts: total=1536,1675,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:13.218 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:13.218 00:24:13.218 Run status group 0 (all jobs): 00:24:13.218 READ: bw=29.1MiB/s (30.6MB/s), 6138KiB/s-9375KiB/s (6285kB/s-9600kB/s), io=29.2MiB (30.6MB), run=1001-1001msec 00:24:13.218 WRITE: bw=31.4MiB/s (32.9MB/s), 6693KiB/s-9.99MiB/s (6854kB/s-10.5MB/s), io=31.4MiB (33.0MB), run=1001-1001msec 00:24:13.218 00:24:13.218 Disk stats (read/write): 00:24:13.218 nvme0n1: ios=1287/1536, merge=0/0, ticks=457/428, in_queue=885, util=89.68% 00:24:13.218 nvme0n2: ios=2034/2048, merge=0/0, ticks=446/349, in_queue=795, util=89.07% 00:24:13.218 nvme0n3: ios=1566/2040, merge=0/0, ticks=446/405, in_queue=851, util=90.00% 00:24:13.218 nvme0n4: ios=1209/1536, merge=0/0, ticks=440/427, in_queue=867, util=88.98% 00:24:13.219 15:42:43 -- target/fio.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p 
nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:24:13.219 [global] 00:24:13.219 thread=1 00:24:13.219 invalidate=1 00:24:13.219 rw=randwrite 00:24:13.219 time_based=1 00:24:13.219 runtime=1 00:24:13.219 ioengine=libaio 00:24:13.219 direct=1 00:24:13.219 bs=4096 00:24:13.219 iodepth=1 00:24:13.219 norandommap=0 00:24:13.219 numjobs=1 00:24:13.219 00:24:13.219 verify_dump=1 00:24:13.219 verify_backlog=512 00:24:13.219 verify_state_save=0 00:24:13.219 do_verify=1 00:24:13.219 verify=crc32c-intel 00:24:13.219 [job0] 00:24:13.219 filename=/dev/nvme0n1 00:24:13.219 [job1] 00:24:13.219 filename=/dev/nvme0n2 00:24:13.219 [job2] 00:24:13.219 filename=/dev/nvme0n3 00:24:13.219 [job3] 00:24:13.219 filename=/dev/nvme0n4 00:24:13.219 Could not set queue depth (nvme0n1) 00:24:13.219 Could not set queue depth (nvme0n2) 00:24:13.219 Could not set queue depth (nvme0n3) 00:24:13.219 Could not set queue depth (nvme0n4) 00:24:13.219 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:13.219 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:13.219 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:13.219 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:13.219 fio-3.35 00:24:13.219 Starting 4 threads 00:24:14.594 00:24:14.594 job0: (groupid=0, jobs=1): err= 0: pid=75953: Fri Apr 26 15:42:44 2024 00:24:14.594 read: IOPS=2825, BW=11.0MiB/s (11.6MB/s)(11.0MiB/1001msec) 00:24:14.594 slat (nsec): min=13969, max=61669, avg=18930.38, stdev=5523.57 00:24:14.594 clat (usec): min=135, max=1908, avg=162.98, stdev=39.37 00:24:14.594 lat (usec): min=153, max=1927, avg=181.91, stdev=40.12 00:24:14.594 clat percentiles (usec): 00:24:14.594 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 149], 20.00th=[ 153], 00:24:14.594 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163], 00:24:14.594 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 176], 95.00th=[ 180], 00:24:14.594 | 99.00th=[ 194], 99.50th=[ 208], 99.90th=[ 644], 99.95th=[ 783], 00:24:14.594 | 99.99th=[ 1909] 00:24:14.594 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:24:14.594 slat (usec): min=20, max=117, avg=30.04, stdev=10.84 00:24:14.594 clat (usec): min=101, max=567, avg=124.03, stdev=15.57 00:24:14.594 lat (usec): min=124, max=594, avg=154.06, stdev=21.53 00:24:14.594 clat percentiles (usec): 00:24:14.594 | 1.00th=[ 105], 5.00th=[ 109], 10.00th=[ 112], 20.00th=[ 115], 00:24:14.594 | 30.00th=[ 118], 40.00th=[ 120], 50.00th=[ 123], 60.00th=[ 125], 00:24:14.594 | 70.00th=[ 128], 80.00th=[ 133], 90.00th=[ 139], 95.00th=[ 145], 00:24:14.594 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 265], 99.95th=[ 392], 00:24:14.594 | 99.99th=[ 570] 00:24:14.594 bw ( KiB/s): min=12288, max=12288, per=30.66%, avg=12288.00, stdev= 0.00, samples=1 00:24:14.594 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:24:14.594 lat (usec) : 250=99.73%, 500=0.19%, 750=0.05%, 1000=0.02% 00:24:14.594 lat (msec) : 2=0.02% 00:24:14.594 cpu : usr=2.90%, sys=10.50%, ctx=5900, majf=0, minf=5 00:24:14.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.594 issued rwts: total=2828,3072,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:24:14.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:14.594 job1: (groupid=0, jobs=1): err= 0: pid=75954: Fri Apr 26 15:42:44 2024 00:24:14.594 read: IOPS=1540, BW=6162KiB/s (6310kB/s)(6168KiB/1001msec) 00:24:14.594 slat (nsec): min=11900, max=47287, avg=14547.16, stdev=2498.32 00:24:14.594 clat (usec): min=177, max=4298, avg=296.65, stdev=103.85 00:24:14.594 lat (usec): min=190, max=4311, avg=311.20, stdev=103.83 00:24:14.594 clat percentiles (usec): 00:24:14.594 | 1.00th=[ 260], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:24:14.594 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:24:14.594 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322], 00:24:14.594 | 99.00th=[ 379], 99.50th=[ 400], 99.90th=[ 420], 99.95th=[ 4293], 00:24:14.594 | 99.99th=[ 4293] 00:24:14.594 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:14.594 slat (usec): min=15, max=110, avg=24.56, stdev= 6.62 00:24:14.594 clat (usec): min=125, max=1605, avg=226.31, stdev=35.66 00:24:14.594 lat (usec): min=156, max=1628, avg=250.87, stdev=35.16 00:24:14.594 clat percentiles (usec): 00:24:14.594 | 1.00th=[ 182], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:24:14.594 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:24:14.594 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 253], 00:24:14.594 | 99.00th=[ 269], 99.50th=[ 277], 99.90th=[ 400], 99.95th=[ 424], 00:24:14.594 | 99.99th=[ 1614] 00:24:14.594 bw ( KiB/s): min= 8192, max= 8192, per=20.44%, avg=8192.00, stdev= 0.00, samples=1 00:24:14.594 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:14.594 lat (usec) : 250=53.90%, 500=46.04% 00:24:14.594 lat (msec) : 2=0.03%, 10=0.03% 00:24:14.594 cpu : usr=1.20%, sys=5.70%, ctx=3590, majf=0, minf=17 00:24:14.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.594 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:14.594 job2: (groupid=0, jobs=1): err= 0: pid=75955: Fri Apr 26 15:42:44 2024 00:24:14.594 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:24:14.594 slat (nsec): min=13132, max=65512, avg=17715.23, stdev=5676.23 00:24:14.594 clat (usec): min=144, max=1785, avg=180.61, stdev=45.89 00:24:14.594 lat (usec): min=160, max=1805, avg=198.33, stdev=47.97 00:24:14.594 clat percentiles (usec): 00:24:14.594 | 1.00th=[ 155], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:24:14.594 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 176], 00:24:14.594 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 198], 95.00th=[ 235], 00:24:14.594 | 99.00th=[ 262], 99.50th=[ 302], 99.90th=[ 725], 99.95th=[ 889], 00:24:14.594 | 99.99th=[ 1778] 00:24:14.594 write: IOPS=2859, BW=11.2MiB/s (11.7MB/s)(11.2MiB/1001msec); 0 zone resets 00:24:14.594 slat (usec): min=19, max=109, avg=26.47, stdev= 8.20 00:24:14.594 clat (usec): min=99, max=1736, avg=141.70, stdev=39.59 00:24:14.594 lat (usec): min=127, max=1759, avg=168.16, stdev=43.74 00:24:14.594 clat percentiles (usec): 00:24:14.594 | 1.00th=[ 116], 5.00th=[ 119], 10.00th=[ 121], 20.00th=[ 125], 00:24:14.594 | 30.00th=[ 127], 40.00th=[ 130], 50.00th=[ 133], 60.00th=[ 137], 00:24:14.594 | 70.00th=[ 141], 80.00th=[ 153], 90.00th=[ 182], 
95.00th=[ 190], 00:24:14.594 | 99.00th=[ 206], 99.50th=[ 221], 99.90th=[ 461], 99.95th=[ 586], 00:24:14.594 | 99.99th=[ 1729] 00:24:14.594 bw ( KiB/s): min=12288, max=12288, per=30.66%, avg=12288.00, stdev= 0.00, samples=1 00:24:14.594 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:24:14.594 lat (usec) : 100=0.02%, 250=98.99%, 500=0.83%, 750=0.11%, 1000=0.02% 00:24:14.594 lat (msec) : 2=0.04% 00:24:14.594 cpu : usr=1.60%, sys=9.60%, ctx=5422, majf=0, minf=7 00:24:14.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.594 issued rwts: total=2560,2862,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:14.594 job3: (groupid=0, jobs=1): err= 0: pid=75956: Fri Apr 26 15:42:44 2024 00:24:14.594 read: IOPS=1540, BW=6162KiB/s (6310kB/s)(6168KiB/1001msec) 00:24:14.594 slat (nsec): min=11797, max=42566, avg=14847.34, stdev=2681.92 00:24:14.594 clat (usec): min=190, max=4289, avg=296.34, stdev=103.48 00:24:14.594 lat (usec): min=204, max=4303, avg=311.19, stdev=103.51 00:24:14.595 clat percentiles (usec): 00:24:14.595 | 1.00th=[ 262], 5.00th=[ 273], 10.00th=[ 277], 20.00th=[ 281], 00:24:14.595 | 30.00th=[ 285], 40.00th=[ 289], 50.00th=[ 293], 60.00th=[ 297], 00:24:14.595 | 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 318], 00:24:14.595 | 99.00th=[ 367], 99.50th=[ 392], 99.90th=[ 433], 99.95th=[ 4293], 00:24:14.595 | 99.99th=[ 4293] 00:24:14.595 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:24:14.595 slat (usec): min=12, max=129, avg=24.48, stdev= 6.57 00:24:14.595 clat (usec): min=109, max=1656, avg=226.38, stdev=36.03 00:24:14.595 lat (usec): min=152, max=1679, avg=250.85, stdev=35.50 00:24:14.595 clat percentiles (usec): 00:24:14.595 | 1.00th=[ 188], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 215], 00:24:14.595 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 225], 60.00th=[ 229], 00:24:14.595 | 70.00th=[ 233], 80.00th=[ 237], 90.00th=[ 247], 95.00th=[ 251], 00:24:14.595 | 99.00th=[ 269], 99.50th=[ 281], 99.90th=[ 371], 99.95th=[ 420], 00:24:14.595 | 99.99th=[ 1663] 00:24:14.595 bw ( KiB/s): min= 8192, max= 8192, per=20.44%, avg=8192.00, stdev= 0.00, samples=1 00:24:14.595 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:24:14.595 lat (usec) : 250=54.01%, 500=45.93% 00:24:14.595 lat (msec) : 2=0.03%, 10=0.03% 00:24:14.595 cpu : usr=1.30%, sys=5.70%, ctx=3594, majf=0, minf=16 00:24:14.595 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:14.595 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.595 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:14.595 issued rwts: total=1542,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:14.595 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:14.595 00:24:14.595 Run status group 0 (all jobs): 00:24:14.595 READ: bw=33.1MiB/s (34.7MB/s), 6162KiB/s-11.0MiB/s (6310kB/s-11.6MB/s), io=33.1MiB (34.7MB), run=1001-1001msec 00:24:14.595 WRITE: bw=39.1MiB/s (41.0MB/s), 8184KiB/s-12.0MiB/s (8380kB/s-12.6MB/s), io=39.2MiB (41.1MB), run=1001-1001msec 00:24:14.595 00:24:14.595 Disk stats (read/write): 00:24:14.595 nvme0n1: ios=2590/2560, merge=0/0, ticks=457/350, in_queue=807, util=89.48% 00:24:14.595 nvme0n2: ios=1586/1559, merge=0/0, 
ticks=483/362, in_queue=845, util=89.81% 00:24:14.595 nvme0n3: ios=2194/2560, merge=0/0, ticks=447/388, in_queue=835, util=90.48% 00:24:14.595 nvme0n4: ios=1569/1558, merge=0/0, ticks=498/374, in_queue=872, util=90.84% 00:24:14.595 15:42:44 -- target/fio.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:24:14.595 [global] 00:24:14.595 thread=1 00:24:14.595 invalidate=1 00:24:14.595 rw=write 00:24:14.595 time_based=1 00:24:14.595 runtime=1 00:24:14.595 ioengine=libaio 00:24:14.595 direct=1 00:24:14.595 bs=4096 00:24:14.595 iodepth=128 00:24:14.595 norandommap=0 00:24:14.595 numjobs=1 00:24:14.595 00:24:14.595 verify_dump=1 00:24:14.595 verify_backlog=512 00:24:14.595 verify_state_save=0 00:24:14.595 do_verify=1 00:24:14.595 verify=crc32c-intel 00:24:14.595 [job0] 00:24:14.595 filename=/dev/nvme0n1 00:24:14.595 [job1] 00:24:14.595 filename=/dev/nvme0n2 00:24:14.595 [job2] 00:24:14.595 filename=/dev/nvme0n3 00:24:14.595 [job3] 00:24:14.595 filename=/dev/nvme0n4 00:24:14.595 Could not set queue depth (nvme0n1) 00:24:14.595 Could not set queue depth (nvme0n2) 00:24:14.595 Could not set queue depth (nvme0n3) 00:24:14.595 Could not set queue depth (nvme0n4) 00:24:14.595 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:14.595 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:14.595 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:14.595 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:14.595 fio-3.35 00:24:14.595 Starting 4 threads 00:24:15.588 00:24:15.588 job0: (groupid=0, jobs=1): err= 0: pid=76012: Fri Apr 26 15:42:45 2024 00:24:15.588 read: IOPS=4690, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1003msec) 00:24:15.588 slat (usec): min=6, max=3303, avg=99.86, stdev=455.65 00:24:15.588 clat (usec): min=357, max=15759, avg=13059.71, stdev=1318.05 00:24:15.588 lat (usec): min=2993, max=17193, avg=13159.56, stdev=1255.16 00:24:15.588 clat percentiles (usec): 00:24:15.588 | 1.00th=[ 6783], 5.00th=[10814], 10.00th=[11600], 20.00th=[12780], 00:24:15.588 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13304], 60.00th=[13435], 00:24:15.588 | 70.00th=[13566], 80.00th=[13698], 90.00th=[14091], 95.00th=[14353], 00:24:15.588 | 99.00th=[15008], 99.50th=[15139], 99.90th=[15664], 99.95th=[15664], 00:24:15.588 | 99.99th=[15795] 00:24:15.588 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:24:15.588 slat (usec): min=11, max=3165, avg=95.63, stdev=379.37 00:24:15.588 clat (usec): min=9865, max=16605, avg=12751.75, stdev=1297.46 00:24:15.588 lat (usec): min=10087, max=16684, avg=12847.39, stdev=1292.10 00:24:15.588 clat percentiles (usec): 00:24:15.588 | 1.00th=[10552], 5.00th=[10814], 10.00th=[11076], 20.00th=[11338], 00:24:15.588 | 30.00th=[11731], 40.00th=[12518], 50.00th=[12911], 60.00th=[13304], 00:24:15.588 | 70.00th=[13566], 80.00th=[13829], 90.00th=[14222], 95.00th=[14746], 00:24:15.588 | 99.00th=[15795], 99.50th=[16188], 99.90th=[16581], 99.95th=[16581], 00:24:15.588 | 99.99th=[16581] 00:24:15.588 bw ( KiB/s): min=20232, max=20480, per=26.09%, avg=20356.00, stdev=175.36, samples=2 00:24:15.588 iops : min= 5058, max= 5120, avg=5089.00, stdev=43.84, samples=2 00:24:15.588 lat (usec) : 500=0.01% 00:24:15.588 lat (msec) : 4=0.33%, 10=0.40%, 20=99.27% 00:24:15.588 cpu : usr=4.69%, sys=14.67%, 
ctx=546, majf=0, minf=7 00:24:15.588 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:15.588 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.588 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.588 issued rwts: total=4705,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.588 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.588 job1: (groupid=0, jobs=1): err= 0: pid=76013: Fri Apr 26 15:42:45 2024 00:24:15.588 read: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec) 00:24:15.588 slat (usec): min=6, max=3745, avg=95.52, stdev=492.71 00:24:15.588 clat (usec): min=9205, max=16406, avg=12615.70, stdev=1005.42 00:24:15.588 lat (usec): min=9223, max=16503, avg=12711.21, stdev=1053.49 00:24:15.588 clat percentiles (usec): 00:24:15.588 | 1.00th=[ 9765], 5.00th=[10552], 10.00th=[11207], 20.00th=[12256], 00:24:15.588 | 30.00th=[12387], 40.00th=[12518], 50.00th=[12518], 60.00th=[12649], 00:24:15.588 | 70.00th=[12911], 80.00th=[13304], 90.00th=[13698], 95.00th=[14222], 00:24:15.588 | 99.00th=[15533], 99.50th=[15795], 99.90th=[16188], 99.95th=[16319], 00:24:15.588 | 99.99th=[16450] 00:24:15.588 write: IOPS=5214, BW=20.4MiB/s (21.4MB/s)(20.4MiB/1003msec); 0 zone resets 00:24:15.588 slat (usec): min=11, max=3580, avg=90.25, stdev=438.49 00:24:15.588 clat (usec): min=196, max=15890, avg=11878.64, stdev=1474.73 00:24:15.588 lat (usec): min=2833, max=15940, avg=11968.89, stdev=1452.08 00:24:15.588 clat percentiles (usec): 00:24:15.588 | 1.00th=[ 7373], 5.00th=[ 9372], 10.00th=[ 9765], 20.00th=[11207], 00:24:15.588 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12256], 60.00th=[12387], 00:24:15.588 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13173], 95.00th=[13304], 00:24:15.588 | 99.00th=[14222], 99.50th=[14615], 99.90th=[15139], 99.95th=[15533], 00:24:15.588 | 99.99th=[15926] 00:24:15.588 bw ( KiB/s): min=20480, max=20480, per=26.25%, avg=20480.00, stdev= 0.00, samples=2 00:24:15.588 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:24:15.588 lat (usec) : 250=0.01% 00:24:15.588 lat (msec) : 4=0.39%, 10=7.11%, 20=92.49% 00:24:15.588 cpu : usr=4.09%, sys=14.97%, ctx=361, majf=0, minf=11 00:24:15.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:15.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.589 issued rwts: total=5120,5230,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.589 job2: (groupid=0, jobs=1): err= 0: pid=76014: Fri Apr 26 15:42:45 2024 00:24:15.589 read: IOPS=4372, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1003msec) 00:24:15.589 slat (usec): min=5, max=4260, avg=108.23, stdev=472.76 00:24:15.589 clat (usec): min=847, max=18478, avg=14185.92, stdev=1729.64 00:24:15.589 lat (usec): min=2712, max=18820, avg=14294.14, stdev=1735.55 00:24:15.589 clat percentiles (usec): 00:24:15.589 | 1.00th=[ 7177], 5.00th=[11600], 10.00th=[12387], 20.00th=[13173], 00:24:15.589 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:24:15.589 | 70.00th=[14877], 80.00th=[15270], 90.00th=[16057], 95.00th=[16581], 00:24:15.589 | 99.00th=[17433], 99.50th=[17695], 99.90th=[17957], 99.95th=[18220], 00:24:15.589 | 99.99th=[18482] 00:24:15.589 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:24:15.589 slat (usec): min=12, max=4177, 
avg=105.97, stdev=458.79 00:24:15.589 clat (usec): min=9788, max=18223, avg=13977.39, stdev=1395.03 00:24:15.589 lat (usec): min=10231, max=18250, avg=14083.35, stdev=1387.03 00:24:15.589 clat percentiles (usec): 00:24:15.589 | 1.00th=[10683], 5.00th=[11076], 10.00th=[11338], 20.00th=[13435], 00:24:15.589 | 30.00th=[13698], 40.00th=[14091], 50.00th=[14353], 60.00th=[14484], 00:24:15.589 | 70.00th=[14615], 80.00th=[14746], 90.00th=[15139], 95.00th=[16057], 00:24:15.589 | 99.00th=[17433], 99.50th=[17695], 99.90th=[18220], 99.95th=[18220], 00:24:15.589 | 99.99th=[18220] 00:24:15.589 bw ( KiB/s): min=17816, max=19048, per=23.62%, avg=18432.00, stdev=871.16, samples=2 00:24:15.589 iops : min= 4454, max= 4762, avg=4608.00, stdev=217.79, samples=2 00:24:15.589 lat (usec) : 1000=0.01% 00:24:15.589 lat (msec) : 4=0.20%, 10=0.67%, 20=99.12% 00:24:15.589 cpu : usr=4.09%, sys=14.67%, ctx=477, majf=0, minf=15 00:24:15.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:15.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.589 issued rwts: total=4386,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.589 job3: (groupid=0, jobs=1): err= 0: pid=76015: Fri Apr 26 15:42:45 2024 00:24:15.589 read: IOPS=4196, BW=16.4MiB/s (17.2MB/s)(16.4MiB/1001msec) 00:24:15.589 slat (usec): min=8, max=5984, avg=113.23, stdev=509.46 00:24:15.589 clat (usec): min=402, max=19933, avg=14687.31, stdev=1789.95 00:24:15.589 lat (usec): min=3628, max=20630, avg=14800.54, stdev=1743.95 00:24:15.589 clat percentiles (usec): 00:24:15.589 | 1.00th=[ 7439], 5.00th=[12125], 10.00th=[13173], 20.00th=[14091], 00:24:15.589 | 30.00th=[14484], 40.00th=[14615], 50.00th=[14746], 60.00th=[14877], 00:24:15.589 | 70.00th=[15139], 80.00th=[15401], 90.00th=[16319], 95.00th=[17433], 00:24:15.589 | 99.00th=[19530], 99.50th=[19792], 99.90th=[20055], 99.95th=[20055], 00:24:15.589 | 99.99th=[20055] 00:24:15.589 write: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec); 0 zone resets 00:24:15.589 slat (usec): min=13, max=3742, avg=105.68, stdev=415.55 00:24:15.589 clat (usec): min=11263, max=17178, avg=14043.22, stdev=1356.99 00:24:15.589 lat (usec): min=11524, max=17228, avg=14148.91, stdev=1357.09 00:24:15.589 clat percentiles (usec): 00:24:15.589 | 1.00th=[11731], 5.00th=[12125], 10.00th=[12256], 20.00th=[12649], 00:24:15.589 | 30.00th=[12911], 40.00th=[13566], 50.00th=[14222], 60.00th=[14484], 00:24:15.589 | 70.00th=[15008], 80.00th=[15401], 90.00th=[15795], 95.00th=[16057], 00:24:15.589 | 99.00th=[16712], 99.50th=[16909], 99.90th=[17171], 99.95th=[17171], 00:24:15.589 | 99.99th=[17171] 00:24:15.589 bw ( KiB/s): min=18072, max=18616, per=23.51%, avg=18344.00, stdev=384.67, samples=2 00:24:15.589 iops : min= 4518, max= 4654, avg=4586.00, stdev=96.17, samples=2 00:24:15.589 lat (usec) : 500=0.01% 00:24:15.589 lat (msec) : 4=0.23%, 10=0.50%, 20=99.26% 00:24:15.589 cpu : usr=3.90%, sys=14.60%, ctx=522, majf=0, minf=17 00:24:15.589 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:15.589 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:15.589 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:15.589 issued rwts: total=4201,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:15.589 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:15.589 
00:24:15.589 Run status group 0 (all jobs): 00:24:15.589 READ: bw=71.7MiB/s (75.2MB/s), 16.4MiB/s-19.9MiB/s (17.2MB/s-20.9MB/s), io=71.9MiB (75.4MB), run=1001-1003msec 00:24:15.589 WRITE: bw=76.2MiB/s (79.9MB/s), 17.9MiB/s-20.4MiB/s (18.8MB/s-21.4MB/s), io=76.4MiB (80.1MB), run=1001-1003msec 00:24:15.589 00:24:15.589 Disk stats (read/write): 00:24:15.589 nvme0n1: ios=4146/4446, merge=0/0, ticks=12611/12328, in_queue=24939, util=89.38% 00:24:15.589 nvme0n2: ios=4414/4608, merge=0/0, ticks=16328/15770, in_queue=32098, util=89.90% 00:24:15.589 nvme0n3: ios=3752/4096, merge=0/0, ticks=16717/16017, in_queue=32734, util=90.06% 00:24:15.589 nvme0n4: ios=3590/4064, merge=0/0, ticks=12374/12436, in_queue=24810, util=89.81% 00:24:15.589 15:42:45 -- target/fio.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:24:15.589 [global] 00:24:15.589 thread=1 00:24:15.589 invalidate=1 00:24:15.589 rw=randwrite 00:24:15.589 time_based=1 00:24:15.589 runtime=1 00:24:15.589 ioengine=libaio 00:24:15.589 direct=1 00:24:15.589 bs=4096 00:24:15.589 iodepth=128 00:24:15.589 norandommap=0 00:24:15.589 numjobs=1 00:24:15.589 00:24:15.589 verify_dump=1 00:24:15.589 verify_backlog=512 00:24:15.589 verify_state_save=0 00:24:15.589 do_verify=1 00:24:15.589 verify=crc32c-intel 00:24:15.589 [job0] 00:24:15.589 filename=/dev/nvme0n1 00:24:15.589 [job1] 00:24:15.589 filename=/dev/nvme0n2 00:24:15.589 [job2] 00:24:15.589 filename=/dev/nvme0n3 00:24:15.589 [job3] 00:24:15.589 filename=/dev/nvme0n4 00:24:15.850 Could not set queue depth (nvme0n1) 00:24:15.850 Could not set queue depth (nvme0n2) 00:24:15.850 Could not set queue depth (nvme0n3) 00:24:15.850 Could not set queue depth (nvme0n4) 00:24:15.850 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:15.850 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:15.850 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:15.850 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:24:15.850 fio-3.35 00:24:15.850 Starting 4 threads 00:24:17.224 00:24:17.224 job0: (groupid=0, jobs=1): err= 0: pid=76068: Fri Apr 26 15:42:47 2024 00:24:17.224 read: IOPS=4791, BW=18.7MiB/s (19.6MB/s)(18.9MiB/1008msec) 00:24:17.224 slat (usec): min=5, max=12066, avg=110.96, stdev=719.89 00:24:17.224 clat (usec): min=4956, max=25309, avg=13907.60, stdev=3720.90 00:24:17.224 lat (usec): min=5158, max=25323, avg=14018.56, stdev=3752.17 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 5604], 5.00th=[ 9765], 10.00th=[10159], 20.00th=[10683], 00:24:17.225 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13435], 00:24:17.225 | 70.00th=[15008], 80.00th=[16581], 90.00th=[19530], 95.00th=[21890], 00:24:17.225 | 99.00th=[23725], 99.50th=[23987], 99.90th=[25297], 99.95th=[25297], 00:24:17.225 | 99.99th=[25297] 00:24:17.225 write: IOPS=5079, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1008msec); 0 zone resets 00:24:17.225 slat (usec): min=4, max=10632, avg=83.19, stdev=328.72 00:24:17.225 clat (usec): min=3296, max=24451, avg=11814.22, stdev=2523.51 00:24:17.225 lat (usec): min=3319, max=24460, avg=11897.41, stdev=2545.83 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 5080], 5.00th=[ 6063], 10.00th=[ 7242], 20.00th=[10159], 00:24:17.225 | 30.00th=[12256], 40.00th=[12649], 
50.00th=[12911], 60.00th=[13042], 00:24:17.225 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13435], 95.00th=[13566], 00:24:17.225 | 99.00th=[13829], 99.50th=[16909], 99.90th=[23987], 99.95th=[24249], 00:24:17.225 | 99.99th=[24511] 00:24:17.225 bw ( KiB/s): min=20480, max=20480, per=31.35%, avg=20480.00, stdev= 0.00, samples=2 00:24:17.225 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:24:17.225 lat (msec) : 4=0.06%, 10=12.91%, 20=82.19%, 50=4.83% 00:24:17.225 cpu : usr=6.06%, sys=10.72%, ctx=830, majf=0, minf=11 00:24:17.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.225 issued rwts: total=4830,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.225 job1: (groupid=0, jobs=1): err= 0: pid=76069: Fri Apr 26 15:42:47 2024 00:24:17.225 read: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec) 00:24:17.225 slat (usec): min=6, max=5650, avg=96.41, stdev=464.88 00:24:17.225 clat (usec): min=7405, max=18530, avg=12387.17, stdev=1414.13 00:24:17.225 lat (usec): min=7690, max=18543, avg=12483.58, stdev=1455.84 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 8586], 5.00th=[ 9765], 10.00th=[10814], 20.00th=[11731], 00:24:17.225 | 30.00th=[11994], 40.00th=[12125], 50.00th=[12387], 60.00th=[12518], 00:24:17.225 | 70.00th=[12780], 80.00th=[13173], 90.00th=[13960], 95.00th=[15008], 00:24:17.225 | 99.00th=[16319], 99.50th=[16909], 99.90th=[17171], 99.95th=[17433], 00:24:17.225 | 99.99th=[18482] 00:24:17.225 write: IOPS=5311, BW=20.7MiB/s (21.8MB/s)(20.9MiB/1006msec); 0 zone resets 00:24:17.225 slat (usec): min=11, max=4767, avg=86.83, stdev=377.98 00:24:17.225 clat (usec): min=5309, max=17738, avg=11951.02, stdev=1453.53 00:24:17.225 lat (usec): min=5783, max=17881, avg=12037.85, stdev=1488.39 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 7504], 5.00th=[ 9372], 10.00th=[10552], 20.00th=[11207], 00:24:17.225 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12256], 00:24:17.225 | 70.00th=[12518], 80.00th=[12518], 90.00th=[12911], 95.00th=[14615], 00:24:17.225 | 99.00th=[16909], 99.50th=[17171], 99.90th=[17695], 99.95th=[17695], 00:24:17.225 | 99.99th=[17695] 00:24:17.225 bw ( KiB/s): min=20728, max=21000, per=31.94%, avg=20864.00, stdev=192.33, samples=2 00:24:17.225 iops : min= 5182, max= 5250, avg=5216.00, stdev=48.08, samples=2 00:24:17.225 lat (msec) : 10=6.66%, 20=93.34% 00:24:17.225 cpu : usr=4.58%, sys=14.93%, ctx=608, majf=0, minf=13 00:24:17.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.225 issued rwts: total=5120,5343,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.225 job2: (groupid=0, jobs=1): err= 0: pid=76070: Fri Apr 26 15:42:47 2024 00:24:17.225 read: IOPS=3424, BW=13.4MiB/s (14.0MB/s)(16.1MiB/1205msec) 00:24:17.225 slat (usec): min=4, max=15414, avg=128.62, stdev=850.40 00:24:17.225 clat (msec): min=5, max=207, avg=17.32, stdev=16.73 00:24:17.225 lat (msec): min=5, max=207, avg=17.44, stdev=16.74 00:24:17.225 clat percentiles (msec): 00:24:17.225 | 1.00th=[ 7], 5.00th=[ 12], 
10.00th=[ 12], 20.00th=[ 13], 00:24:17.225 | 30.00th=[ 14], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 16], 00:24:17.225 | 70.00th=[ 18], 80.00th=[ 20], 90.00th=[ 24], 95.00th=[ 26], 00:24:17.225 | 99.00th=[ 32], 99.50th=[ 207], 99.90th=[ 209], 99.95th=[ 209], 00:24:17.225 | 99.99th=[ 209] 00:24:17.225 write: IOPS=3824, BW=14.9MiB/s (15.7MB/s)(18.0MiB/1205msec); 0 zone resets 00:24:17.225 slat (usec): min=6, max=11523, avg=95.11, stdev=401.13 00:24:17.225 clat (msec): min=2, max=218, avg=17.65, stdev=28.80 00:24:17.225 lat (msec): min=2, max=218, avg=17.74, stdev=28.80 00:24:17.225 clat percentiles (msec): 00:24:17.225 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 13], 00:24:17.225 | 30.00th=[ 15], 40.00th=[ 15], 50.00th=[ 15], 60.00th=[ 15], 00:24:17.225 | 70.00th=[ 15], 80.00th=[ 16], 90.00th=[ 16], 95.00th=[ 16], 00:24:17.225 | 99.00th=[ 213], 99.50th=[ 215], 99.90th=[ 220], 99.95th=[ 220], 00:24:17.225 | 99.99th=[ 220] 00:24:17.225 bw ( KiB/s): min=17520, max=18568, per=27.62%, avg=18044.00, stdev=741.05, samples=2 00:24:17.225 iops : min= 4380, max= 4642, avg=4511.00, stdev=185.26, samples=2 00:24:17.225 lat (msec) : 4=0.07%, 10=8.53%, 20=81.03%, 50=8.92%, 250=1.45% 00:24:17.225 cpu : usr=2.91%, sys=8.72%, ctx=646, majf=0, minf=15 00:24:17.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:17.225 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.225 issued rwts: total=4126,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.225 job3: (groupid=0, jobs=1): err= 0: pid=76071: Fri Apr 26 15:42:47 2024 00:24:17.225 read: IOPS=4080, BW=15.9MiB/s (16.7MB/s)(16.1MiB/1013msec) 00:24:17.225 slat (usec): min=3, max=13420, avg=129.24, stdev=850.77 00:24:17.225 clat (usec): min=4767, max=27940, avg=15730.34, stdev=3953.67 00:24:17.225 lat (usec): min=4780, max=27952, avg=15859.58, stdev=3997.23 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 6390], 5.00th=[11338], 10.00th=[11994], 20.00th=[12780], 00:24:17.225 | 30.00th=[14091], 40.00th=[14222], 50.00th=[14484], 60.00th=[15008], 00:24:17.225 | 70.00th=[16581], 80.00th=[18744], 90.00th=[21890], 95.00th=[24511], 00:24:17.225 | 99.00th=[26870], 99.50th=[27132], 99.90th=[27919], 99.95th=[27919], 00:24:17.225 | 99.99th=[27919] 00:24:17.225 write: IOPS=4548, BW=17.8MiB/s (18.6MB/s)(18.0MiB/1013msec); 0 zone resets 00:24:17.225 slat (usec): min=4, max=11784, avg=94.21, stdev=377.33 00:24:17.225 clat (usec): min=3495, max=27912, avg=13717.78, stdev=3034.35 00:24:17.225 lat (usec): min=3517, max=27920, avg=13812.00, stdev=3059.77 00:24:17.225 clat percentiles (usec): 00:24:17.225 | 1.00th=[ 5145], 5.00th=[ 7177], 10.00th=[ 8586], 20.00th=[12125], 00:24:17.225 | 30.00th=[14091], 40.00th=[14484], 50.00th=[14746], 60.00th=[14877], 00:24:17.225 | 70.00th=[15008], 80.00th=[15270], 90.00th=[15401], 95.00th=[15664], 00:24:17.225 | 99.00th=[22938], 99.50th=[25560], 99.90th=[27395], 99.95th=[27395], 00:24:17.225 | 99.99th=[27919] 00:24:17.225 bw ( KiB/s): min=17992, max=18160, per=27.67%, avg=18076.00, stdev=118.79, samples=2 00:24:17.225 iops : min= 4498, max= 4540, avg=4519.00, stdev=29.70, samples=2 00:24:17.225 lat (msec) : 4=0.16%, 10=8.40%, 20=84.01%, 50=7.44% 00:24:17.225 cpu : usr=4.25%, sys=9.19%, ctx=643, majf=0, minf=11 00:24:17.225 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:24:17.225 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:17.225 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:24:17.225 issued rwts: total=4134,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:17.225 latency : target=0, window=0, percentile=100.00%, depth=128 00:24:17.225 00:24:17.225 Run status group 0 (all jobs): 00:24:17.225 READ: bw=59.0MiB/s (61.9MB/s), 13.4MiB/s-19.9MiB/s (14.0MB/s-20.8MB/s), io=71.1MiB (74.6MB), run=1006-1205msec 00:24:17.225 WRITE: bw=63.8MiB/s (66.9MB/s), 14.9MiB/s-20.7MiB/s (15.7MB/s-21.8MB/s), io=76.9MiB (80.6MB), run=1006-1205msec 00:24:17.225 00:24:17.225 Disk stats (read/write): 00:24:17.225 nvme0n1: ios=4146/4527, merge=0/0, ticks=52913/51945, in_queue=104858, util=89.37% 00:24:17.225 nvme0n2: ios=4459/4608, merge=0/0, ticks=26158/24305, in_queue=50463, util=89.90% 00:24:17.225 nvme0n3: ios=4158/4608, merge=0/0, ticks=62476/60553, in_queue=123029, util=92.58% 00:24:17.225 nvme0n4: ios=3601/3943, merge=0/0, ticks=53501/51771, in_queue=105272, util=90.02% 00:24:17.225 15:42:47 -- target/fio.sh@55 -- # sync 00:24:17.225 15:42:47 -- target/fio.sh@59 -- # fio_pid=76090 00:24:17.225 15:42:47 -- target/fio.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:24:17.225 15:42:47 -- target/fio.sh@61 -- # sleep 3 00:24:17.225 [global] 00:24:17.225 thread=1 00:24:17.225 invalidate=1 00:24:17.225 rw=read 00:24:17.225 time_based=1 00:24:17.225 runtime=10 00:24:17.225 ioengine=libaio 00:24:17.225 direct=1 00:24:17.225 bs=4096 00:24:17.225 iodepth=1 00:24:17.225 norandommap=1 00:24:17.225 numjobs=1 00:24:17.225 00:24:17.225 [job0] 00:24:17.225 filename=/dev/nvme0n1 00:24:17.225 [job1] 00:24:17.225 filename=/dev/nvme0n2 00:24:17.225 [job2] 00:24:17.225 filename=/dev/nvme0n3 00:24:17.225 [job3] 00:24:17.225 filename=/dev/nvme0n4 00:24:17.484 Could not set queue depth (nvme0n1) 00:24:17.484 Could not set queue depth (nvme0n2) 00:24:17.484 Could not set queue depth (nvme0n3) 00:24:17.484 Could not set queue depth (nvme0n4) 00:24:17.484 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:17.484 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:17.484 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:17.484 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:24:17.484 fio-3.35 00:24:17.484 Starting 4 threads 00:24:20.766 15:42:50 -- target/fio.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete concat0 00:24:20.766 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=27865088, buflen=4096 00:24:20.766 fio: pid=76133, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:20.766 15:42:50 -- target/fio.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_raid_delete raid0 00:24:20.766 fio: pid=76132, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:20.766 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=32325632, buflen=4096 00:24:20.766 15:42:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:20.766 15:42:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:24:21.332 fio: pid=76130, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:21.332 
fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=38875136, buflen=4096 00:24:21.332 15:42:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:21.332 15:42:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:24:21.591 fio: pid=76131, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:24:21.591 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=16195584, buflen=4096 00:24:21.591 00:24:21.591 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76130: Fri Apr 26 15:42:51 2024 00:24:21.591 read: IOPS=2670, BW=10.4MiB/s (10.9MB/s)(37.1MiB/3554msec) 00:24:21.591 slat (usec): min=8, max=16918, avg=25.13, stdev=268.31 00:24:21.591 clat (usec): min=120, max=3364, avg=347.39, stdev=157.59 00:24:21.591 lat (usec): min=149, max=17098, avg=372.53, stdev=310.16 00:24:21.591 clat percentiles (usec): 00:24:21.591 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 253], 00:24:21.591 | 30.00th=[ 285], 40.00th=[ 343], 50.00th=[ 355], 60.00th=[ 363], 00:24:21.591 | 70.00th=[ 371], 80.00th=[ 383], 90.00th=[ 494], 95.00th=[ 676], 00:24:21.591 | 99.00th=[ 816], 99.50th=[ 873], 99.90th=[ 1254], 99.95th=[ 1876], 00:24:21.591 | 99.99th=[ 3359] 00:24:21.591 bw ( KiB/s): min= 5664, max=11472, per=20.47%, avg=9441.33, stdev=1992.91, samples=6 00:24:21.591 iops : min= 1416, max= 2868, avg=2360.33, stdev=498.23, samples=6 00:24:21.591 lat (usec) : 250=19.43%, 500=70.63%, 750=7.03%, 1000=2.63% 00:24:21.591 lat (msec) : 2=0.23%, 4=0.04% 00:24:21.591 cpu : usr=1.35%, sys=4.36%, ctx=9503, majf=0, minf=1 00:24:21.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 issued rwts: total=9492,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:21.591 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76131: Fri Apr 26 15:42:51 2024 00:24:21.591 read: IOPS=5266, BW=20.6MiB/s (21.6MB/s)(79.4MiB/3862msec) 00:24:21.591 slat (usec): min=12, max=12395, avg=18.27, stdev=154.10 00:24:21.591 clat (usec): min=3, max=2396, avg=170.11, stdev=51.00 00:24:21.591 lat (usec): min=146, max=12667, avg=188.38, stdev=163.76 00:24:21.591 clat percentiles (usec): 00:24:21.591 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 151], 00:24:21.591 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 163], 00:24:21.591 | 70.00th=[ 167], 80.00th=[ 176], 90.00th=[ 204], 95.00th=[ 225], 00:24:21.591 | 99.00th=[ 371], 99.50th=[ 392], 99.90th=[ 502], 99.95th=[ 889], 00:24:21.591 | 99.99th=[ 1975] 00:24:21.591 bw ( KiB/s): min=16792, max=23016, per=45.29%, avg=20886.71, stdev=2157.82, samples=7 00:24:21.591 iops : min= 4198, max= 5754, avg=5221.57, stdev=539.44, samples=7 00:24:21.591 lat (usec) : 4=0.01%, 50=0.01%, 250=96.99%, 500=2.88%, 750=0.04% 00:24:21.591 lat (usec) : 1000=0.01% 00:24:21.591 lat (msec) : 2=0.03%, 4=0.01% 00:24:21.591 cpu : usr=1.66%, sys=6.97%, ctx=20362, majf=0, minf=1 00:24:21.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 issued rwts: total=20339,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:21.591 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76132: Fri Apr 26 15:42:51 2024 00:24:21.591 read: IOPS=2430, BW=9722KiB/s (9956kB/s)(30.8MiB/3247msec) 00:24:21.591 slat (usec): min=12, max=9379, avg=28.21, stdev=137.56 00:24:21.591 clat (usec): min=144, max=2131, avg=380.63, stdev=125.59 00:24:21.591 lat (usec): min=162, max=9571, avg=408.84, stdev=188.47 00:24:21.591 clat percentiles (usec): 00:24:21.591 | 1.00th=[ 169], 5.00th=[ 265], 10.00th=[ 273], 20.00th=[ 302], 00:24:21.591 | 30.00th=[ 343], 40.00th=[ 351], 50.00th=[ 359], 60.00th=[ 363], 00:24:21.591 | 70.00th=[ 375], 80.00th=[ 392], 90.00th=[ 570], 95.00th=[ 676], 00:24:21.591 | 99.00th=[ 799], 99.50th=[ 848], 99.90th=[ 1254], 99.95th=[ 1352], 00:24:21.591 | 99.99th=[ 2147] 00:24:21.591 bw ( KiB/s): min= 5752, max=11336, per=20.67%, avg=9533.33, stdev=1991.03, samples=6 00:24:21.591 iops : min= 1438, max= 2834, avg=2383.33, stdev=497.76, samples=6 00:24:21.591 lat (usec) : 250=2.66%, 500=85.43%, 750=9.13%, 1000=2.55% 00:24:21.591 lat (msec) : 2=0.20%, 4=0.01% 00:24:21.591 cpu : usr=1.20%, sys=5.11%, ctx=7918, majf=0, minf=1 00:24:21.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 issued rwts: total=7893,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:21.591 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=76133: Fri Apr 26 15:42:51 2024 00:24:21.591 read: IOPS=2300, BW=9199KiB/s (9420kB/s)(26.6MiB/2958msec) 00:24:21.591 slat (usec): min=8, max=107, avg=20.68, stdev= 7.57 00:24:21.591 clat (usec): min=175, max=8279, avg=411.85, stdev=188.44 00:24:21.591 lat (usec): min=190, max=8293, avg=432.53, stdev=190.90 00:24:21.591 clat percentiles (usec): 00:24:21.591 | 1.00th=[ 262], 5.00th=[ 318], 10.00th=[ 338], 20.00th=[ 351], 00:24:21.591 | 30.00th=[ 359], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 375], 00:24:21.591 | 70.00th=[ 388], 80.00th=[ 404], 90.00th=[ 619], 95.00th=[ 701], 00:24:21.591 | 99.00th=[ 824], 99.50th=[ 922], 99.90th=[ 3032], 99.95th=[ 3490], 00:24:21.591 | 99.99th=[ 8291] 00:24:21.591 bw ( KiB/s): min= 5664, max=10352, per=19.54%, avg=9012.80, stdev=1916.24, samples=5 00:24:21.591 iops : min= 1416, max= 2588, avg=2253.20, stdev=479.06, samples=5 00:24:21.591 lat (usec) : 250=0.82%, 500=85.63%, 750=9.82%, 1000=3.38% 00:24:21.591 lat (msec) : 2=0.22%, 4=0.09%, 10=0.03% 00:24:21.591 cpu : usr=1.05%, sys=4.29%, ctx=6814, majf=0, minf=1 00:24:21.591 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:24:21.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:24:21.591 issued rwts: total=6804,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:24:21.591 latency : target=0, window=0, percentile=100.00%, depth=1 00:24:21.591 00:24:21.591 Run status group 0 (all jobs): 00:24:21.591 READ: bw=45.0MiB/s (47.2MB/s), 9199KiB/s-20.6MiB/s (9420kB/s-21.6MB/s), io=174MiB (182MB), run=2958-3862msec 00:24:21.591 00:24:21.591 Disk stats (read/write): 
00:24:21.591 nvme0n1: ios=8402/0, merge=0/0, ticks=3072/0, in_queue=3072, util=94.88% 00:24:21.591 nvme0n2: ios=18870/0, merge=0/0, ticks=3277/0, in_queue=3277, util=95.42% 00:24:21.591 nvme0n3: ios=7466/0, merge=0/0, ticks=2928/0, in_queue=2928, util=96.30% 00:24:21.591 nvme0n4: ios=6572/0, merge=0/0, ticks=2648/0, in_queue=2648, util=96.49% 00:24:21.591 15:42:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:21.591 15:42:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:24:21.849 15:42:51 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:21.849 15:42:51 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:24:22.107 15:42:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:22.107 15:42:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:24:22.365 15:42:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:22.365 15:42:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:24:22.622 15:42:52 -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:24:22.622 15:42:52 -- target/fio.sh@66 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:24:22.880 15:42:53 -- target/fio.sh@69 -- # fio_status=0 00:24:22.880 15:42:53 -- target/fio.sh@70 -- # wait 76090 00:24:22.880 15:42:53 -- target/fio.sh@70 -- # fio_status=4 00:24:22.880 15:42:53 -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:24:22.880 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:24:22.880 15:42:53 -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:24:22.880 15:42:53 -- common/autotest_common.sh@1205 -- # local i=0 00:24:22.880 15:42:53 -- common/autotest_common.sh@1206 -- # lsblk -o NAME,SERIAL 00:24:22.880 15:42:53 -- common/autotest_common.sh@1206 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:22.880 15:42:53 -- common/autotest_common.sh@1213 -- # lsblk -l -o NAME,SERIAL 00:24:22.880 15:42:53 -- common/autotest_common.sh@1213 -- # grep -q -w SPDKISFASTANDAWESOME 00:24:22.880 nvmf hotplug test: fio failed as expected 00:24:22.880 15:42:53 -- common/autotest_common.sh@1217 -- # return 0 00:24:22.880 15:42:53 -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:24:22.880 15:42:53 -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:24:22.880 15:42:53 -- target/fio.sh@83 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:23.136 15:42:53 -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:24:23.136 15:42:53 -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:24:23.136 15:42:53 -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:24:23.136 15:42:53 -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:24:23.136 15:42:53 -- target/fio.sh@91 -- # nvmftestfini 00:24:23.136 15:42:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:23.136 15:42:53 -- nvmf/common.sh@117 -- # sync 00:24:23.136 15:42:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:23.136 15:42:53 -- nvmf/common.sh@120 -- # set +e 00:24:23.136 15:42:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:23.136 15:42:53 -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:24:23.136 rmmod nvme_tcp 00:24:23.392 rmmod nvme_fabrics 00:24:23.392 rmmod nvme_keyring 00:24:23.392 15:42:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:23.392 15:42:53 -- nvmf/common.sh@124 -- # set -e 00:24:23.392 15:42:53 -- nvmf/common.sh@125 -- # return 0 00:24:23.393 15:42:53 -- nvmf/common.sh@478 -- # '[' -n 75595 ']' 00:24:23.393 15:42:53 -- nvmf/common.sh@479 -- # killprocess 75595 00:24:23.393 15:42:53 -- common/autotest_common.sh@936 -- # '[' -z 75595 ']' 00:24:23.393 15:42:53 -- common/autotest_common.sh@940 -- # kill -0 75595 00:24:23.393 15:42:53 -- common/autotest_common.sh@941 -- # uname 00:24:23.393 15:42:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:23.393 15:42:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 75595 00:24:23.393 killing process with pid 75595 00:24:23.393 15:42:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:24:23.393 15:42:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:24:23.393 15:42:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 75595' 00:24:23.393 15:42:53 -- common/autotest_common.sh@955 -- # kill 75595 00:24:23.393 15:42:53 -- common/autotest_common.sh@960 -- # wait 75595 00:24:23.650 15:42:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:23.650 15:42:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:23.650 15:42:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:23.650 15:42:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:23.650 15:42:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:23.650 15:42:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.650 15:42:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.650 15:42:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.650 15:42:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:23.650 00:24:23.650 real 0m20.192s 00:24:23.650 user 1m18.222s 00:24:23.650 sys 0m8.692s 00:24:23.650 15:42:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:23.650 15:42:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.651 ************************************ 00:24:23.651 END TEST nvmf_fio_target 00:24:23.651 ************************************ 00:24:23.651 15:42:53 -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:23.651 15:42:53 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:23.651 15:42:53 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:23.651 15:42:53 -- common/autotest_common.sh@10 -- # set +x 00:24:23.651 ************************************ 00:24:23.651 START TEST nvmf_bdevio 00:24:23.651 ************************************ 00:24:23.651 15:42:53 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:24:23.908 * Looking for test storage... 
00:24:23.908 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:23.908 15:42:54 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:23.908 15:42:54 -- nvmf/common.sh@7 -- # uname -s 00:24:23.908 15:42:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.908 15:42:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.908 15:42:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.908 15:42:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.908 15:42:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.908 15:42:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.908 15:42:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.908 15:42:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.908 15:42:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.908 15:42:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.908 15:42:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:23.908 15:42:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:23.908 15:42:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.908 15:42:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.908 15:42:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:23.908 15:42:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.908 15:42:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:23.908 15:42:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.908 15:42:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.908 15:42:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.908 15:42:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.908 15:42:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.909 15:42:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.909 15:42:54 -- paths/export.sh@5 -- # export PATH 00:24:23.909 15:42:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.909 15:42:54 -- nvmf/common.sh@47 -- # : 0 00:24:23.909 15:42:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:23.909 15:42:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:23.909 15:42:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.909 15:42:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.909 15:42:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.909 15:42:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:23.909 15:42:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:23.909 15:42:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:23.909 15:42:54 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:23.909 15:42:54 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:23.909 15:42:54 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:23.909 15:42:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:23.909 15:42:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.909 15:42:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:23.909 15:42:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:23.909 15:42:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:23.909 15:42:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.909 15:42:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:23.909 15:42:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.909 15:42:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:23.909 15:42:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:23.909 15:42:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:23.909 15:42:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:23.909 15:42:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:23.909 15:42:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:23.909 15:42:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:23.909 15:42:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:23.909 15:42:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:23.909 15:42:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:23.909 15:42:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:23.909 15:42:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:23.909 15:42:54 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:23.909 15:42:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:23.909 15:42:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:23.909 15:42:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:23.909 15:42:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:23.909 15:42:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:23.909 15:42:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:23.909 15:42:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:23.909 Cannot find device "nvmf_tgt_br" 00:24:23.909 15:42:54 -- nvmf/common.sh@155 -- # true 00:24:23.909 15:42:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:23.909 Cannot find device "nvmf_tgt_br2" 00:24:23.909 15:42:54 -- nvmf/common.sh@156 -- # true 00:24:23.909 15:42:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:23.909 15:42:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:23.909 Cannot find device "nvmf_tgt_br" 00:24:23.909 15:42:54 -- nvmf/common.sh@158 -- # true 00:24:23.909 15:42:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:23.909 Cannot find device "nvmf_tgt_br2" 00:24:23.909 15:42:54 -- nvmf/common.sh@159 -- # true 00:24:23.909 15:42:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:23.909 15:42:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:23.909 15:42:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:23.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.909 15:42:54 -- nvmf/common.sh@162 -- # true 00:24:23.909 15:42:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:23.909 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:23.909 15:42:54 -- nvmf/common.sh@163 -- # true 00:24:23.909 15:42:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:23.909 15:42:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:23.909 15:42:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:23.909 15:42:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:24.167 15:42:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:24.167 15:42:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:24.167 15:42:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:24.167 15:42:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:24.167 15:42:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:24.167 15:42:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:24.167 15:42:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:24.167 15:42:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:24.167 15:42:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:24.167 15:42:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:24.167 15:42:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:24.167 15:42:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:24:24.167 15:42:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:24.167 15:42:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:24.167 15:42:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:24.167 15:42:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:24.167 15:42:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:24.167 15:42:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:24.167 15:42:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:24.167 15:42:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:24.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:24.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.095 ms 00:24:24.167 00:24:24.167 --- 10.0.0.2 ping statistics --- 00:24:24.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.167 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:24:24.167 15:42:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:24.167 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:24.167 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.093 ms 00:24:24.167 00:24:24.167 --- 10.0.0.3 ping statistics --- 00:24:24.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.167 rtt min/avg/max/mdev = 0.093/0.093/0.093/0.000 ms 00:24:24.167 15:42:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:24.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:24.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:24:24.167 00:24:24.167 --- 10.0.0.1 ping statistics --- 00:24:24.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:24.167 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:24:24.167 15:42:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:24.167 15:42:54 -- nvmf/common.sh@422 -- # return 0 00:24:24.167 15:42:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:24.167 15:42:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:24.167 15:42:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:24.167 15:42:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:24.167 15:42:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:24.167 15:42:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:24.167 15:42:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:24.167 15:42:54 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:24.167 15:42:54 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:24.167 15:42:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:24.167 15:42:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.167 15:42:54 -- nvmf/common.sh@470 -- # nvmfpid=76466 00:24:24.167 15:42:54 -- nvmf/common.sh@471 -- # waitforlisten 76466 00:24:24.167 15:42:54 -- common/autotest_common.sh@817 -- # '[' -z 76466 ']' 00:24:24.167 15:42:54 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:24:24.167 15:42:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:24.167 15:42:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:24.167 15:42:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
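The veth and bridge plumbing above is scattered across many xtrace lines, so the following is a condensed sketch of the topology that nvmf_veth_init appears to build before the target starts. Interface names, addresses, and the iptables rule are taken from the log; the second target interface (nvmf_tgt_if2/nvmf_tgt_br2 on 10.0.0.3) follows the same pattern and is omitted here, and this is an illustration rather than the exact helper from nvmf/common.sh.

# Sketch only: condensed from the ip/iptables commands logged above.
ip netns add nvmf_tgt_ns_spdk                                  # the target runs in its own namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br        # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk                 # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                 # bridge ties the two veth pairs together
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2                                              # connectivity check before starting nvmf_tgt

Running nvmf_tgt inside nvmf_tgt_ns_spdk lets the host-side kernel nvme-tcp initiator reach 10.0.0.2:4420 over the veth/bridge path with no physical NIC involved, which is what NET_TYPE=virt selects.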
00:24:24.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:24.167 15:42:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:24.167 15:42:54 -- common/autotest_common.sh@10 -- # set +x 00:24:24.425 [2024-04-26 15:42:54.483178] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:24:24.425 [2024-04-26 15:42:54.483277] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:24.425 [2024-04-26 15:42:54.620462] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:24.682 [2024-04-26 15:42:54.745592] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:24.682 [2024-04-26 15:42:54.745667] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:24.682 [2024-04-26 15:42:54.745679] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:24.682 [2024-04-26 15:42:54.745688] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:24.682 [2024-04-26 15:42:54.745696] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:24.682 [2024-04-26 15:42:54.745894] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:24.682 [2024-04-26 15:42:54.745996] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:24.682 [2024-04-26 15:42:54.746266] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:24.682 [2024-04-26 15:42:54.746319] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:25.620 15:42:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:25.620 15:42:55 -- common/autotest_common.sh@850 -- # return 0 00:24:25.620 15:42:55 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:25.620 15:42:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:25.620 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 15:42:55 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:25.620 15:42:55 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:25.620 15:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.620 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 [2024-04-26 15:42:55.594518] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:25.620 15:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.620 15:42:55 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:25.620 15:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.620 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 Malloc0 00:24:25.620 15:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.620 15:42:55 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:25.620 15:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.620 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 15:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.620 15:42:55 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:25.620 15:42:55 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.620 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 15:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.620 15:42:55 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:25.620 15:42:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:25.620 15:42:55 -- common/autotest_common.sh@10 -- # set +x 00:24:25.620 [2024-04-26 15:42:55.664304] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:25.620 15:42:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:25.620 15:42:55 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:24:25.620 15:42:55 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:25.620 15:42:55 -- nvmf/common.sh@521 -- # config=() 00:24:25.620 15:42:55 -- nvmf/common.sh@521 -- # local subsystem config 00:24:25.620 15:42:55 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:25.620 15:42:55 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:25.620 { 00:24:25.620 "params": { 00:24:25.620 "name": "Nvme$subsystem", 00:24:25.620 "trtype": "$TEST_TRANSPORT", 00:24:25.620 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:25.620 "adrfam": "ipv4", 00:24:25.620 "trsvcid": "$NVMF_PORT", 00:24:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:25.620 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:25.620 "hdgst": ${hdgst:-false}, 00:24:25.620 "ddgst": ${ddgst:-false} 00:24:25.620 }, 00:24:25.620 "method": "bdev_nvme_attach_controller" 00:24:25.620 } 00:24:25.620 EOF 00:24:25.620 )") 00:24:25.620 15:42:55 -- nvmf/common.sh@543 -- # cat 00:24:25.620 15:42:55 -- nvmf/common.sh@545 -- # jq . 00:24:25.620 15:42:55 -- nvmf/common.sh@546 -- # IFS=, 00:24:25.620 15:42:55 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:25.620 "params": { 00:24:25.620 "name": "Nvme1", 00:24:25.620 "trtype": "tcp", 00:24:25.620 "traddr": "10.0.0.2", 00:24:25.620 "adrfam": "ipv4", 00:24:25.620 "trsvcid": "4420", 00:24:25.620 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:25.620 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:25.620 "hdgst": false, 00:24:25.620 "ddgst": false 00:24:25.620 }, 00:24:25.620 "method": "bdev_nvme_attach_controller" 00:24:25.620 }' 00:24:25.620 [2024-04-26 15:42:55.727158] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
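Taken together, the rpc_cmd calls above form a short provisioning sequence for this bdevio run. A condensed sketch with the same arguments that appear in the log (the rpc.py path is as logged, the default /var/tmp/spdk.sock socket is assumed, and the comments are editorial):

# Sketch only: the target-side setup driven through rpc.py in the log above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192                          # TCP transport, 8192-byte in-capsule data size
$RPC bdev_malloc_create 64 512 -b Malloc0                             # 64 MiB RAM-backed bdev with 512-byte blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0         # expose Malloc0 as a namespace of cnode1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio itself never calls rpc.py; it receives the equivalent bdev_nvme_attach_controller request through the JSON config passed on /dev/fd/62, which is the block printed by gen_nvmf_target_json just above.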
00:24:25.620 [2024-04-26 15:42:55.727277] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76520 ] 00:24:25.620 [2024-04-26 15:42:55.872007] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:25.878 [2024-04-26 15:42:55.994425] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.878 [2024-04-26 15:42:55.994524] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:25.878 [2024-04-26 15:42:55.994528] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:26.136 I/O targets: 00:24:26.136 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:26.136 00:24:26.136 00:24:26.136 CUnit - A unit testing framework for C - Version 2.1-3 00:24:26.136 http://cunit.sourceforge.net/ 00:24:26.136 00:24:26.136 00:24:26.136 Suite: bdevio tests on: Nvme1n1 00:24:26.136 Test: blockdev write read block ...passed 00:24:26.136 Test: blockdev write zeroes read block ...passed 00:24:26.136 Test: blockdev write zeroes read no split ...passed 00:24:26.136 Test: blockdev write zeroes read split ...passed 00:24:26.136 Test: blockdev write zeroes read split partial ...passed 00:24:26.136 Test: blockdev reset ...[2024-04-26 15:42:56.336074] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:26.136 [2024-04-26 15:42:56.336207] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb0d610 (9): Bad file descriptor 00:24:26.136 [2024-04-26 15:42:56.350674] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:26.136 passed 00:24:26.136 Test: blockdev write read 8 blocks ...passed 00:24:26.136 Test: blockdev write read size > 128k ...passed 00:24:26.136 Test: blockdev write read invalid size ...passed 00:24:26.136 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:26.136 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:26.136 Test: blockdev write read max offset ...passed 00:24:26.393 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:26.393 Test: blockdev writev readv 8 blocks ...passed 00:24:26.393 Test: blockdev writev readv 30 x 1block ...passed 00:24:26.393 Test: blockdev writev readv block ...passed 00:24:26.393 Test: blockdev writev readv size > 128k ...passed 00:24:26.393 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:26.393 Test: blockdev comparev and writev ...[2024-04-26 15:42:56.525676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.525996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.393 [2024-04-26 15:42:56.526116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.526250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:26.393 [2024-04-26 15:42:56.526658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.526806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:26.393 [2024-04-26 15:42:56.526906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.526997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:26.393 [2024-04-26 15:42:56.527431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.527548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:26.393 [2024-04-26 15:42:56.527641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.527749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:26.393 [2024-04-26 15:42:56.528111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.393 [2024-04-26 15:42:56.528240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:26.394 [2024-04-26 15:42:56.528342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:26.394 [2024-04-26 15:42:56.528465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:26.394 passed 00:24:26.394 Test: blockdev nvme passthru rw ...passed 00:24:26.394 Test: blockdev nvme passthru vendor specific ...[2024-04-26 15:42:56.610790] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.394 [2024-04-26 15:42:56.611027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:26.394 [2024-04-26 15:42:56.611284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.394 [2024-04-26 15:42:56.611391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:26.394 [2024-04-26 15:42:56.611581] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.394 [2024-04-26 15:42:56.611679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:26.394 [2024-04-26 15:42:56.611872] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:26.394 [2024-04-26 15:42:56.611955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:26.394 passed 00:24:26.394 Test: blockdev nvme admin passthru ...passed 00:24:26.394 Test: blockdev copy ...passed 00:24:26.394 00:24:26.394 Run Summary: Type Total Ran Passed Failed Inactive 00:24:26.394 suites 1 1 n/a 0 0 00:24:26.394 tests 23 23 23 0 0 00:24:26.394 asserts 
152 152 152 0 n/a 00:24:26.394 00:24:26.394 Elapsed time = 0.987 seconds 00:24:26.743 15:42:56 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:26.744 15:42:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:26.744 15:42:56 -- common/autotest_common.sh@10 -- # set +x 00:24:26.744 15:42:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:26.744 15:42:56 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:26.744 15:42:56 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:26.744 15:42:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:26.744 15:42:56 -- nvmf/common.sh@117 -- # sync 00:24:26.744 15:42:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:26.744 15:42:56 -- nvmf/common.sh@120 -- # set +e 00:24:26.744 15:42:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:26.744 15:42:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:26.744 rmmod nvme_tcp 00:24:26.744 rmmod nvme_fabrics 00:24:26.744 rmmod nvme_keyring 00:24:26.744 15:42:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:26.744 15:42:56 -- nvmf/common.sh@124 -- # set -e 00:24:26.744 15:42:56 -- nvmf/common.sh@125 -- # return 0 00:24:26.744 15:42:56 -- nvmf/common.sh@478 -- # '[' -n 76466 ']' 00:24:26.744 15:42:56 -- nvmf/common.sh@479 -- # killprocess 76466 00:24:26.744 15:42:56 -- common/autotest_common.sh@936 -- # '[' -z 76466 ']' 00:24:26.744 15:42:56 -- common/autotest_common.sh@940 -- # kill -0 76466 00:24:26.744 15:42:56 -- common/autotest_common.sh@941 -- # uname 00:24:26.744 15:42:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:26.744 15:42:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76466 00:24:26.744 15:42:57 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:24:26.744 15:42:57 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:24:26.744 killing process with pid 76466 00:24:26.744 15:42:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76466' 00:24:26.744 15:42:57 -- common/autotest_common.sh@955 -- # kill 76466 00:24:26.744 15:42:57 -- common/autotest_common.sh@960 -- # wait 76466 00:24:27.314 15:42:57 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:27.314 15:42:57 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:27.314 15:42:57 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:27.314 15:42:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.314 15:42:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.314 15:42:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.314 15:42:57 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:27.314 00:24:27.314 real 0m3.438s 00:24:27.314 user 0m12.348s 00:24:27.314 sys 0m0.807s 00:24:27.314 15:42:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:27.314 15:42:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.314 ************************************ 00:24:27.314 END TEST nvmf_bdevio 00:24:27.314 ************************************ 00:24:27.314 15:42:57 -- nvmf/nvmf.sh@58 -- # '[' tcp = tcp ']' 00:24:27.314 15:42:57 -- nvmf/nvmf.sh@59 -- # run_test nvmf_bdevio_no_huge /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:27.314 15:42:57 -- common/autotest_common.sh@1087 -- # '[' 4 -le 1 ']' 00:24:27.314 
15:42:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:27.314 15:42:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.314 ************************************ 00:24:27.314 START TEST nvmf_bdevio_no_huge 00:24:27.314 ************************************ 00:24:27.314 15:42:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:24:27.314 * Looking for test storage... 00:24:27.314 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:27.314 15:42:57 -- target/bdevio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:27.314 15:42:57 -- nvmf/common.sh@7 -- # uname -s 00:24:27.314 15:42:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:27.314 15:42:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:27.314 15:42:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:27.314 15:42:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:27.314 15:42:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:27.314 15:42:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:27.314 15:42:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:27.314 15:42:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:27.314 15:42:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:27.314 15:42:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:27.314 15:42:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:27.314 15:42:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:27.314 15:42:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:27.314 15:42:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:27.314 15:42:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:27.314 15:42:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:27.314 15:42:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:27.314 15:42:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:27.314 15:42:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:27.314 15:42:57 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.314 15:42:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.314 15:42:57 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.314 15:42:57 -- paths/export.sh@5 -- # export PATH 00:24:27.314 15:42:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:27.314 15:42:57 -- nvmf/common.sh@47 -- # : 0 00:24:27.314 15:42:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:27.314 15:42:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:27.314 15:42:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:27.314 15:42:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:27.314 15:42:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:27.314 15:42:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:27.314 15:42:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:27.314 15:42:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:27.314 15:42:57 -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:27.314 15:42:57 -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:27.314 15:42:57 -- target/bdevio.sh@14 -- # nvmftestinit 00:24:27.314 15:42:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:27.314 15:42:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:27.314 15:42:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:27.314 15:42:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:27.314 15:42:57 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:27.314 15:42:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:27.314 15:42:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:27.314 15:42:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:27.314 15:42:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:27.314 15:42:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:27.314 15:42:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:27.314 15:42:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:27.314 15:42:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:27.314 15:42:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:27.314 15:42:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:27.314 15:42:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:27.314 15:42:57 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:27.314 15:42:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:27.314 15:42:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:27.314 15:42:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:27.314 15:42:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:27.314 15:42:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:27.314 15:42:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:27.314 15:42:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:27.572 Cannot find device "nvmf_tgt_br" 00:24:27.572 15:42:57 -- nvmf/common.sh@155 -- # true 00:24:27.572 15:42:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:27.572 Cannot find device "nvmf_tgt_br2" 00:24:27.572 15:42:57 -- nvmf/common.sh@156 -- # true 00:24:27.572 15:42:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:27.572 15:42:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:27.572 Cannot find device "nvmf_tgt_br" 00:24:27.572 15:42:57 -- nvmf/common.sh@158 -- # true 00:24:27.572 15:42:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:27.572 Cannot find device "nvmf_tgt_br2" 00:24:27.572 15:42:57 -- nvmf/common.sh@159 -- # true 00:24:27.572 15:42:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:27.572 15:42:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:27.572 15:42:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:27.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.572 15:42:57 -- nvmf/common.sh@162 -- # true 00:24:27.572 15:42:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:27.572 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:27.572 15:42:57 -- nvmf/common.sh@163 -- # true 00:24:27.572 15:42:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:27.572 15:42:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:27.572 15:42:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:27.572 15:42:57 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:27.572 15:42:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:27.572 15:42:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:27.572 15:42:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:27.572 15:42:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:27.572 15:42:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:27.572 15:42:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:27.572 15:42:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:27.572 15:42:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:27.572 15:42:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:27.572 15:42:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:27.572 15:42:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:27.572 15:42:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:24:27.572 15:42:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:27.572 15:42:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:27.572 15:42:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:27.572 15:42:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:27.831 15:42:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:27.831 15:42:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:27.831 15:42:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:27.831 15:42:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:27.831 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:27.831 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.110 ms 00:24:27.831 00:24:27.831 --- 10.0.0.2 ping statistics --- 00:24:27.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.831 rtt min/avg/max/mdev = 0.110/0.110/0.110/0.000 ms 00:24:27.831 15:42:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:27.831 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:27.831 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:24:27.831 00:24:27.831 --- 10.0.0.3 ping statistics --- 00:24:27.831 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.832 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:24:27.832 15:42:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:27.832 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:27.832 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.020 ms 00:24:27.832 00:24:27.832 --- 10.0.0.1 ping statistics --- 00:24:27.832 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:27.832 rtt min/avg/max/mdev = 0.020/0.020/0.020/0.000 ms 00:24:27.832 15:42:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:27.832 15:42:57 -- nvmf/common.sh@422 -- # return 0 00:24:27.832 15:42:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:27.832 15:42:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:27.832 15:42:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:27.832 15:42:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:27.832 15:42:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:27.832 15:42:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:27.832 15:42:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:27.832 15:42:57 -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:24:27.832 15:42:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:27.832 15:42:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:27.832 15:42:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 15:42:57 -- nvmf/common.sh@470 -- # nvmfpid=76708 00:24:27.832 15:42:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:24:27.832 15:42:57 -- nvmf/common.sh@471 -- # waitforlisten 76708 00:24:27.832 15:42:57 -- common/autotest_common.sh@817 -- # '[' -z 76708 ']' 00:24:27.832 15:42:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:27.832 15:42:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:27.832 15:42:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:24:27.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:27.832 15:42:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:27.832 15:42:57 -- common/autotest_common.sh@10 -- # set +x 00:24:27.832 [2024-04-26 15:42:57.968055] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:24:27.832 [2024-04-26 15:42:57.968206] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:24:27.832 [2024-04-26 15:42:58.115009] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:28.093 [2024-04-26 15:42:58.244732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:28.093 [2024-04-26 15:42:58.244843] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:28.093 [2024-04-26 15:42:58.244855] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:28.093 [2024-04-26 15:42:58.244863] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:28.093 [2024-04-26 15:42:58.244870] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:28.093 [2024-04-26 15:42:58.245033] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 4 00:24:28.093 [2024-04-26 15:42:58.245899] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 5 00:24:28.093 [2024-04-26 15:42:58.246011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 6 00:24:28.093 [2024-04-26 15:42:58.246018] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:24:28.660 15:42:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:28.660 15:42:58 -- common/autotest_common.sh@850 -- # return 0 00:24:28.660 15:42:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:28.660 15:42:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:28.661 15:42:58 -- common/autotest_common.sh@10 -- # set +x 00:24:28.918 15:42:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:28.918 15:42:58 -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:28.918 15:42:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.918 15:42:58 -- common/autotest_common.sh@10 -- # set +x 00:24:28.918 [2024-04-26 15:42:58.994779] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:28.918 15:42:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.918 15:42:59 -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:28.918 15:42:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.918 15:42:59 -- common/autotest_common.sh@10 -- # set +x 00:24:28.918 Malloc0 00:24:28.918 15:42:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.918 15:42:59 -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:28.918 15:42:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.918 15:42:59 -- common/autotest_common.sh@10 -- # set +x 00:24:28.918 15:42:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.919 15:42:59 -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:28.919 15:42:59 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.919 15:42:59 -- common/autotest_common.sh@10 -- # set +x 00:24:28.919 15:42:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.919 15:42:59 -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:28.919 15:42:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:28.919 15:42:59 -- common/autotest_common.sh@10 -- # set +x 00:24:28.919 [2024-04-26 15:42:59.043185] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:28.919 15:42:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:28.919 15:42:59 -- target/bdevio.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:24:28.919 15:42:59 -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:24:28.919 15:42:59 -- nvmf/common.sh@521 -- # config=() 00:24:28.919 15:42:59 -- nvmf/common.sh@521 -- # local subsystem config 00:24:28.919 15:42:59 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:24:28.919 15:42:59 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:24:28.919 { 00:24:28.919 "params": { 00:24:28.919 "name": "Nvme$subsystem", 00:24:28.919 "trtype": "$TEST_TRANSPORT", 00:24:28.919 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:28.919 "adrfam": "ipv4", 00:24:28.919 "trsvcid": "$NVMF_PORT", 00:24:28.919 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:28.919 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:28.919 "hdgst": ${hdgst:-false}, 00:24:28.919 "ddgst": ${ddgst:-false} 00:24:28.919 }, 00:24:28.919 "method": "bdev_nvme_attach_controller" 00:24:28.919 } 00:24:28.919 EOF 00:24:28.919 )") 00:24:28.919 15:42:59 -- nvmf/common.sh@543 -- # cat 00:24:28.919 15:42:59 -- nvmf/common.sh@545 -- # jq . 00:24:28.919 15:42:59 -- nvmf/common.sh@546 -- # IFS=, 00:24:28.919 15:42:59 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:24:28.919 "params": { 00:24:28.919 "name": "Nvme1", 00:24:28.919 "trtype": "tcp", 00:24:28.919 "traddr": "10.0.0.2", 00:24:28.919 "adrfam": "ipv4", 00:24:28.919 "trsvcid": "4420", 00:24:28.919 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:28.919 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:28.919 "hdgst": false, 00:24:28.919 "ddgst": false 00:24:28.919 }, 00:24:28.919 "method": "bdev_nvme_attach_controller" 00:24:28.919 }' 00:24:28.919 [2024-04-26 15:42:59.100195] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
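The no-huge variant repeats the same provisioning; the only real difference is how memory is reserved for the two SPDK processes. A recap of the invocations from the log (the JSON bdev config on fd 62 is generated the same way as in the previous run):

# Both processes run with --no-huge and -s 1024, i.e. 1024 MiB of ordinary,
# non-hugepage memory; this shows up as "-m 1024 --no-huge --iova-mode=va" in
# the DPDK EAL parameter lines.
ip netns exec nvmf_tgt_ns_spdk \
    /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024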
00:24:28.919 [2024-04-26 15:42:59.100307] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid76762 ] 00:24:29.193 [2024-04-26 15:42:59.249681] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:29.193 [2024-04-26 15:42:59.397336] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:29.193 [2024-04-26 15:42:59.397410] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:29.193 [2024-04-26 15:42:59.397624] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.450 I/O targets: 00:24:29.450 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:24:29.450 00:24:29.450 00:24:29.450 CUnit - A unit testing framework for C - Version 2.1-3 00:24:29.450 http://cunit.sourceforge.net/ 00:24:29.450 00:24:29.450 00:24:29.450 Suite: bdevio tests on: Nvme1n1 00:24:29.450 Test: blockdev write read block ...passed 00:24:29.450 Test: blockdev write zeroes read block ...passed 00:24:29.450 Test: blockdev write zeroes read no split ...passed 00:24:29.450 Test: blockdev write zeroes read split ...passed 00:24:29.450 Test: blockdev write zeroes read split partial ...passed 00:24:29.450 Test: blockdev reset ...[2024-04-26 15:42:59.709806] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:29.450 [2024-04-26 15:42:59.710235] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x975180 (9): Bad file descriptor 00:24:29.450 [2024-04-26 15:42:59.729406] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:24:29.450 passed 00:24:29.450 Test: blockdev write read 8 blocks ...passed 00:24:29.450 Test: blockdev write read size > 128k ...passed 00:24:29.450 Test: blockdev write read invalid size ...passed 00:24:29.708 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:24:29.708 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:24:29.708 Test: blockdev write read max offset ...passed 00:24:29.708 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:24:29.708 Test: blockdev writev readv 8 blocks ...passed 00:24:29.708 Test: blockdev writev readv 30 x 1block ...passed 00:24:29.708 Test: blockdev writev readv block ...passed 00:24:29.708 Test: blockdev writev readv size > 128k ...passed 00:24:29.708 Test: blockdev writev readv size > 128k in two iovs ...passed 00:24:29.708 Test: blockdev comparev and writev ...[2024-04-26 15:42:59.901416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.901478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.901498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.901509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.901886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.901908] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.901925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.901934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.902289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.902309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.902326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.902336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.902698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.902725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.902742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:24:29.708 [2024-04-26 15:42:59.902752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:29.708 passed 00:24:29.708 Test: blockdev nvme passthru rw ...passed 00:24:29.708 Test: blockdev nvme passthru vendor specific ...[2024-04-26 15:42:59.985624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:29.708 [2024-04-26 15:42:59.985683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.985835] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:29.708 [2024-04-26 15:42:59.985850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.985990] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:29.708 [2024-04-26 15:42:59.986015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:29.708 [2024-04-26 15:42:59.986150] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:24:29.708 [2024-04-26 15:42:59.986167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:29.708 passed 00:24:29.708 Test: blockdev nvme admin passthru ...passed 00:24:29.967 Test: blockdev copy ...passed 00:24:29.967 00:24:29.967 Run Summary: Type Total Ran Passed Failed Inactive 00:24:29.967 suites 1 1 n/a 0 0 00:24:29.967 tests 23 23 23 0 0 00:24:29.967 asserts 152 152 152 0 
n/a 00:24:29.967 00:24:29.967 Elapsed time = 0.925 seconds 00:24:30.225 15:43:00 -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:30.225 15:43:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:24:30.225 15:43:00 -- common/autotest_common.sh@10 -- # set +x 00:24:30.225 15:43:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:24:30.225 15:43:00 -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:24:30.225 15:43:00 -- target/bdevio.sh@30 -- # nvmftestfini 00:24:30.225 15:43:00 -- nvmf/common.sh@477 -- # nvmfcleanup 00:24:30.225 15:43:00 -- nvmf/common.sh@117 -- # sync 00:24:30.484 15:43:00 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:30.484 15:43:00 -- nvmf/common.sh@120 -- # set +e 00:24:30.484 15:43:00 -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:30.484 15:43:00 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:30.484 rmmod nvme_tcp 00:24:30.484 rmmod nvme_fabrics 00:24:30.484 rmmod nvme_keyring 00:24:30.484 15:43:00 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:30.484 15:43:00 -- nvmf/common.sh@124 -- # set -e 00:24:30.484 15:43:00 -- nvmf/common.sh@125 -- # return 0 00:24:30.484 15:43:00 -- nvmf/common.sh@478 -- # '[' -n 76708 ']' 00:24:30.484 15:43:00 -- nvmf/common.sh@479 -- # killprocess 76708 00:24:30.484 15:43:00 -- common/autotest_common.sh@936 -- # '[' -z 76708 ']' 00:24:30.484 15:43:00 -- common/autotest_common.sh@940 -- # kill -0 76708 00:24:30.484 15:43:00 -- common/autotest_common.sh@941 -- # uname 00:24:30.484 15:43:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:24:30.484 15:43:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76708 00:24:30.484 killing process with pid 76708 00:24:30.484 15:43:00 -- common/autotest_common.sh@942 -- # process_name=reactor_3 00:24:30.484 15:43:00 -- common/autotest_common.sh@946 -- # '[' reactor_3 = sudo ']' 00:24:30.484 15:43:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76708' 00:24:30.484 15:43:00 -- common/autotest_common.sh@955 -- # kill 76708 00:24:30.484 15:43:00 -- common/autotest_common.sh@960 -- # wait 76708 00:24:31.050 15:43:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:24:31.050 15:43:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:24:31.050 15:43:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:24:31.050 15:43:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:31.050 15:43:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:31.050 15:43:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.050 15:43:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.050 15:43:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.050 15:43:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:24:31.050 ************************************ 00:24:31.050 END TEST nvmf_bdevio_no_huge 00:24:31.050 ************************************ 00:24:31.050 00:24:31.050 real 0m3.620s 00:24:31.050 user 0m13.107s 00:24:31.050 sys 0m1.366s 00:24:31.050 15:43:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:24:31.050 15:43:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.050 15:43:01 -- nvmf/nvmf.sh@60 -- # run_test nvmf_tls /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:31.050 15:43:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:24:31.050 15:43:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:24:31.050 15:43:01 -- 
common/autotest_common.sh@10 -- # set +x 00:24:31.050 ************************************ 00:24:31.050 START TEST nvmf_tls 00:24:31.050 ************************************ 00:24:31.050 15:43:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/tls.sh --transport=tcp 00:24:31.050 * Looking for test storage... 00:24:31.050 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:24:31.050 15:43:01 -- target/tls.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:24:31.050 15:43:01 -- nvmf/common.sh@7 -- # uname -s 00:24:31.050 15:43:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.050 15:43:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.050 15:43:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.051 15:43:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.051 15:43:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.051 15:43:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.051 15:43:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.051 15:43:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.051 15:43:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.051 15:43:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.051 15:43:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:31.051 15:43:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:24:31.051 15:43:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.051 15:43:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.051 15:43:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:24:31.051 15:43:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.051 15:43:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:24:31.051 15:43:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.051 15:43:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.051 15:43:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.051 15:43:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.051 15:43:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.051 15:43:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.051 15:43:01 -- paths/export.sh@5 -- # export PATH 00:24:31.051 15:43:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.051 15:43:01 -- nvmf/common.sh@47 -- # : 0 00:24:31.051 15:43:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.051 15:43:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.051 15:43:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.051 15:43:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.051 15:43:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.051 15:43:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.051 15:43:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.051 15:43:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.051 15:43:01 -- target/tls.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:24:31.051 15:43:01 -- target/tls.sh@62 -- # nvmftestinit 00:24:31.051 15:43:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:24:31.051 15:43:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:31.051 15:43:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:24:31.051 15:43:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:24:31.051 15:43:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:24:31.051 15:43:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.051 15:43:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:31.051 15:43:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.051 15:43:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:24:31.051 15:43:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:24:31.051 15:43:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:24:31.051 15:43:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:24:31.051 15:43:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:24:31.051 15:43:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:24:31.051 15:43:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:31.051 15:43:01 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:31.051 15:43:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:24:31.051 15:43:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:24:31.051 15:43:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:24:31.051 15:43:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:24:31.051 15:43:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:24:31.051 
15:43:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:31.051 15:43:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:24:31.051 15:43:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:24:31.051 15:43:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:24:31.051 15:43:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:24:31.051 15:43:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:24:31.051 15:43:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:24:31.051 Cannot find device "nvmf_tgt_br" 00:24:31.051 15:43:01 -- nvmf/common.sh@155 -- # true 00:24:31.051 15:43:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:24:31.320 Cannot find device "nvmf_tgt_br2" 00:24:31.320 15:43:01 -- nvmf/common.sh@156 -- # true 00:24:31.320 15:43:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:24:31.320 15:43:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:24:31.320 Cannot find device "nvmf_tgt_br" 00:24:31.320 15:43:01 -- nvmf/common.sh@158 -- # true 00:24:31.320 15:43:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:24:31.320 Cannot find device "nvmf_tgt_br2" 00:24:31.320 15:43:01 -- nvmf/common.sh@159 -- # true 00:24:31.320 15:43:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:24:31.320 15:43:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:24:31.320 15:43:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:24:31.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.320 15:43:01 -- nvmf/common.sh@162 -- # true 00:24:31.320 15:43:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:24:31.320 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:24:31.320 15:43:01 -- nvmf/common.sh@163 -- # true 00:24:31.320 15:43:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:24:31.320 15:43:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:24:31.320 15:43:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:24:31.320 15:43:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:24:31.320 15:43:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:24:31.320 15:43:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:24:31.320 15:43:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:24:31.320 15:43:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:24:31.320 15:43:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:24:31.320 15:43:01 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:24:31.320 15:43:01 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:24:31.320 15:43:01 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:24:31.320 15:43:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:24:31.320 15:43:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:24:31.320 15:43:01 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:24:31.320 15:43:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:24:31.320 15:43:01 -- 
nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:24:31.320 15:43:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:24:31.320 15:43:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:24:31.320 15:43:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:24:31.320 15:43:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:24:31.320 15:43:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:24:31.320 15:43:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:24:31.320 15:43:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:24:31.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:31.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.084 ms 00:24:31.320 00:24:31.320 --- 10.0.0.2 ping statistics --- 00:24:31.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.320 rtt min/avg/max/mdev = 0.084/0.084/0.084/0.000 ms 00:24:31.320 15:43:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:24:31.578 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:24:31.578 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.054 ms 00:24:31.578 00:24:31.578 --- 10.0.0.3 ping statistics --- 00:24:31.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.578 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:24:31.578 15:43:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:24:31.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:31.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.037 ms 00:24:31.578 00:24:31.578 --- 10.0.0.1 ping statistics --- 00:24:31.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:31.578 rtt min/avg/max/mdev = 0.037/0.037/0.037/0.000 ms 00:24:31.578 15:43:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:31.578 15:43:01 -- nvmf/common.sh@422 -- # return 0 00:24:31.578 15:43:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:24:31.578 15:43:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:31.578 15:43:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:24:31.578 15:43:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:24:31.578 15:43:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:31.578 15:43:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:24:31.578 15:43:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:24:31.578 15:43:01 -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:24:31.578 15:43:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:24:31.578 15:43:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:24:31.578 15:43:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.578 15:43:01 -- nvmf/common.sh@470 -- # nvmfpid=76959 00:24:31.578 15:43:01 -- nvmf/common.sh@471 -- # waitforlisten 76959 00:24:31.578 15:43:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:24:31.578 15:43:01 -- common/autotest_common.sh@817 -- # '[' -z 76959 ']' 00:24:31.578 15:43:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:31.578 15:43:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:31.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
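Note: the nvmf/common.sh@166-207 block above builds the virtual test network the TLS target will listen on: nvmf_init_if (10.0.0.1) stays on the host, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) move into the nvmf_tgt_ns_spdk namespace, everything is joined through the nvmf_br bridge, TCP/4420 is opened in iptables, and three pings verify connectivity. A minimal Python sketch of that same sequence, illustration only; it assumes root and iproute2, exactly like the script it mirrors:

#!/usr/bin/env python3
# Sketch: recreate the veth/bridge topology used by the nvmf TCP tests.
# Interface names, addresses and iptables rules are taken from the log above.
import subprocess

NS = "nvmf_tgt_ns_spdk"

def sh(cmd):
    # Echo and run one command; check=True mirrors the test scripts' set -e.
    print("+", cmd)
    subprocess.run(cmd, shell=True, check=True)

sh(f"ip netns add {NS}")
sh("ip link add nvmf_init_if type veth peer name nvmf_init_br")
sh("ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br")
sh("ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2")
sh(f"ip link set nvmf_tgt_if netns {NS}")
sh(f"ip link set nvmf_tgt_if2 netns {NS}")
sh("ip addr add 10.0.0.1/24 dev nvmf_init_if")
sh(f"ip netns exec {NS} ip addr add 10.0.0.2/24 dev nvmf_tgt_if")
sh(f"ip netns exec {NS} ip addr add 10.0.0.3/24 dev nvmf_tgt_if2")
for link in ("nvmf_init_if", "nvmf_init_br", "nvmf_tgt_br", "nvmf_tgt_br2"):
    sh(f"ip link set {link} up")
for link in ("nvmf_tgt_if", "nvmf_tgt_if2", "lo"):
    sh(f"ip netns exec {NS} ip link set {link} up")
sh("ip link add nvmf_br type bridge")
sh("ip link set nvmf_br up")
for link in ("nvmf_init_br", "nvmf_tgt_br", "nvmf_tgt_br2"):
    sh(f"ip link set {link} master nvmf_br")
sh("iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT")
sh("iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT")
sh("ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3")
sh(f"ip netns exec {NS} ping -c 1 10.0.0.1")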
00:24:31.578 15:43:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:31.578 15:43:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:31.578 15:43:01 -- common/autotest_common.sh@10 -- # set +x 00:24:31.578 [2024-04-26 15:43:01.688055] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:24:31.578 [2024-04-26 15:43:01.688182] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:31.578 [2024-04-26 15:43:01.827907] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:31.837 [2024-04-26 15:43:01.953956] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:31.837 [2024-04-26 15:43:01.954027] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:31.837 [2024-04-26 15:43:01.954050] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:31.837 [2024-04-26 15:43:01.954060] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:31.837 [2024-04-26 15:43:01.954070] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:31.837 [2024-04-26 15:43:01.954110] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:24:32.403 15:43:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:32.403 15:43:02 -- common/autotest_common.sh@850 -- # return 0 00:24:32.403 15:43:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:24:32.403 15:43:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:24:32.403 15:43:02 -- common/autotest_common.sh@10 -- # set +x 00:24:32.661 15:43:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:32.661 15:43:02 -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:24:32.661 15:43:02 -- target/tls.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:24:32.919 true 00:24:32.919 15:43:03 -- target/tls.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:32.919 15:43:03 -- target/tls.sh@73 -- # jq -r .tls_version 00:24:33.178 15:43:03 -- target/tls.sh@73 -- # version=0 00:24:33.178 15:43:03 -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:24:33.178 15:43:03 -- target/tls.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:33.435 15:43:03 -- target/tls.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:33.435 15:43:03 -- target/tls.sh@81 -- # jq -r .tls_version 00:24:33.760 15:43:03 -- target/tls.sh@81 -- # version=13 00:24:33.761 15:43:03 -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:24:33.761 15:43:03 -- target/tls.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:24:34.020 15:43:04 -- target/tls.sh@89 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:34.020 15:43:04 -- target/tls.sh@89 -- # jq -r .tls_version 00:24:34.278 15:43:04 -- target/tls.sh@89 -- # version=7 00:24:34.278 15:43:04 -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:24:34.278 15:43:04 -- target/tls.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
sock_impl_get_options -i ssl 00:24:34.278 15:43:04 -- target/tls.sh@96 -- # jq -r .enable_ktls 00:24:34.536 15:43:04 -- target/tls.sh@96 -- # ktls=false 00:24:34.536 15:43:04 -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:24:34.536 15:43:04 -- target/tls.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:24:34.794 15:43:04 -- target/tls.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:34.794 15:43:04 -- target/tls.sh@104 -- # jq -r .enable_ktls 00:24:35.052 15:43:05 -- target/tls.sh@104 -- # ktls=true 00:24:35.052 15:43:05 -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:24:35.052 15:43:05 -- target/tls.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:24:35.310 15:43:05 -- target/tls.sh@112 -- # jq -r .enable_ktls 00:24:35.310 15:43:05 -- target/tls.sh@112 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:24:35.568 15:43:05 -- target/tls.sh@112 -- # ktls=false 00:24:35.568 15:43:05 -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:24:35.568 15:43:05 -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:24:35.568 15:43:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:24:35.568 15:43:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:35.568 15:43:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:35.568 15:43:05 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:24:35.568 15:43:05 -- nvmf/common.sh@693 -- # digest=1 00:24:35.568 15:43:05 -- nvmf/common.sh@694 -- # python - 00:24:35.568 15:43:05 -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:35.568 15:43:05 -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:24:35.568 15:43:05 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:24:35.568 15:43:05 -- nvmf/common.sh@691 -- # local prefix key digest 00:24:35.568 15:43:05 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:24:35.568 15:43:05 -- nvmf/common.sh@693 -- # key=ffeeddccbbaa99887766554433221100 00:24:35.568 15:43:05 -- nvmf/common.sh@693 -- # digest=1 00:24:35.568 15:43:05 -- nvmf/common.sh@694 -- # python - 00:24:35.568 15:43:05 -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:35.568 15:43:05 -- target/tls.sh@121 -- # mktemp 00:24:35.568 15:43:05 -- target/tls.sh@121 -- # key_path=/tmp/tmp.FZ0eNWGbVp 00:24:35.568 15:43:05 -- target/tls.sh@122 -- # mktemp 00:24:35.568 15:43:05 -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.6RSIeK5tPi 00:24:35.568 15:43:05 -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:24:35.568 15:43:05 -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:24:35.568 15:43:05 -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FZ0eNWGbVp 00:24:35.568 15:43:05 -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.6RSIeK5tPi 00:24:35.568 15:43:05 -- target/tls.sh@130 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:24:35.826 15:43:06 -- target/tls.sh@131 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:24:36.394 15:43:06 -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FZ0eNWGbVp 00:24:36.394 15:43:06 -- target/tls.sh@49 -- # local 
key=/tmp/tmp.FZ0eNWGbVp 00:24:36.394 15:43:06 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:24:36.653 [2024-04-26 15:43:06.763744] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:36.653 15:43:06 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:24:36.939 15:43:07 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:24:37.197 [2024-04-26 15:43:07.315866] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:37.197 [2024-04-26 15:43:07.316098] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:37.197 15:43:07 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:24:37.455 malloc0 00:24:37.455 15:43:07 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:24:37.713 15:43:07 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FZ0eNWGbVp 00:24:37.971 [2024-04-26 15:43:08.031553] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:24:37.971 15:43:08 -- target/tls.sh@137 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FZ0eNWGbVp 00:24:50.165 Initializing NVMe Controllers 00:24:50.165 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:50.165 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:50.165 Initialization complete. Launching workers. 
00:24:50.165 ======================================================== 00:24:50.165 Latency(us) 00:24:50.165 Device Information : IOPS MiB/s Average min max 00:24:50.165 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8831.89 34.50 7248.25 2392.47 10934.12 00:24:50.165 ======================================================== 00:24:50.165 Total : 8831.89 34.50 7248.25 2392.47 10934.12 00:24:50.165 00:24:50.165 15:43:18 -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZ0eNWGbVp 00:24:50.165 15:43:18 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:24:50.165 15:43:18 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:24:50.165 15:43:18 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:24:50.165 15:43:18 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FZ0eNWGbVp' 00:24:50.165 15:43:18 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:50.165 15:43:18 -- target/tls.sh@28 -- # bdevperf_pid=77319 00:24:50.165 15:43:18 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:50.165 15:43:18 -- target/tls.sh@31 -- # waitforlisten 77319 /var/tmp/bdevperf.sock 00:24:50.165 15:43:18 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:50.165 15:43:18 -- common/autotest_common.sh@817 -- # '[' -z 77319 ']' 00:24:50.165 15:43:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:50.165 15:43:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:24:50.165 15:43:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:50.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:50.165 15:43:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:24:50.165 15:43:18 -- common/autotest_common.sh@10 -- # set +x 00:24:50.165 [2024-04-26 15:43:18.295871] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
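Note: the key handed to the perf run above via --psk-path, and to this bdevperf run via --psk, is /tmp/tmp.FZ0eNWGbVp, one of the two NVMeTLSkey-1 interchange strings generated at target/tls.sh@118-128 earlier. Judging from the printed keys, the layout appears to be base64(key bytes followed by a little-endian CRC32) behind a two-digit digest field; a small Python sketch under that assumption, not the authoritative SPDK helper:

#!/usr/bin/env python3
# Hedged sketch of the NVMeTLSkey-1 interchange format used in this log.
# Assumption inferred from the printed keys: payload is
# base64(key_bytes + crc32(key_bytes) little-endian), middle field is the
# digest indicator passed to format_interchange_psk (1 or 2).
import base64
import zlib

def format_interchange_psk(key: str, digest: int) -> str:
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    payload = base64.b64encode(raw + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02}:{}:".format(digest, payload)

# If the layout guess is right, these match the two keys printed earlier,
# e.g. NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:
print(format_interchange_psk("00112233445566778899aabbccddeeff", 1))
print(format_interchange_psk("ffeeddccbbaa99887766554433221100", 1))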
00:24:50.165 [2024-04-26 15:43:18.296010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77319 ] 00:24:50.165 [2024-04-26 15:43:18.434739] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.165 [2024-04-26 15:43:18.563812] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:24:50.165 15:43:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:24:50.165 15:43:19 -- common/autotest_common.sh@850 -- # return 0 00:24:50.165 15:43:19 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FZ0eNWGbVp 00:24:50.165 [2024-04-26 15:43:19.661651] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:50.165 [2024-04-26 15:43:19.661768] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:24:50.165 TLSTESTn1 00:24:50.165 15:43:19 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:24:50.165 Running I/O for 10 seconds... 00:25:00.149 00:25:00.149 Latency(us) 00:25:00.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.149 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:00.149 Verification LBA range: start 0x0 length 0x2000 00:25:00.149 TLSTESTn1 : 10.03 3735.32 14.59 0.00 0.00 34190.65 9711.24 27405.96 00:25:00.149 =================================================================================================================== 00:25:00.149 Total : 3735.32 14.59 0.00 0.00 34190.65 9711.24 27405.96 00:25:00.149 0 00:25:00.149 15:43:29 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:00.149 15:43:29 -- target/tls.sh@45 -- # killprocess 77319 00:25:00.149 15:43:29 -- common/autotest_common.sh@936 -- # '[' -z 77319 ']' 00:25:00.149 15:43:29 -- common/autotest_common.sh@940 -- # kill -0 77319 00:25:00.149 15:43:29 -- common/autotest_common.sh@941 -- # uname 00:25:00.149 15:43:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.149 15:43:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77319 00:25:00.149 killing process with pid 77319 00:25:00.149 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.149 00:25:00.149 Latency(us) 00:25:00.149 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.149 =================================================================================================================== 00:25:00.149 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:00.149 15:43:29 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:00.149 15:43:29 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:00.149 15:43:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77319' 00:25:00.149 15:43:29 -- common/autotest_common.sh@955 -- # kill 77319 00:25:00.149 [2024-04-26 15:43:29.941751] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:00.149 
15:43:29 -- common/autotest_common.sh@960 -- # wait 77319 00:25:00.150 15:43:30 -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6RSIeK5tPi 00:25:00.150 15:43:30 -- common/autotest_common.sh@638 -- # local es=0 00:25:00.150 15:43:30 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6RSIeK5tPi 00:25:00.150 15:43:30 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:25:00.150 15:43:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:00.150 15:43:30 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:25:00.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.150 15:43:30 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:00.150 15:43:30 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.6RSIeK5tPi 00:25:00.150 15:43:30 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:00.150 15:43:30 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:00.150 15:43:30 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:00.150 15:43:30 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.6RSIeK5tPi' 00:25:00.150 15:43:30 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.150 15:43:30 -- target/tls.sh@28 -- # bdevperf_pid=77476 00:25:00.150 15:43:30 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.150 15:43:30 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:00.150 15:43:30 -- target/tls.sh@31 -- # waitforlisten 77476 /var/tmp/bdevperf.sock 00:25:00.150 15:43:30 -- common/autotest_common.sh@817 -- # '[' -z 77476 ']' 00:25:00.150 15:43:30 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.150 15:43:30 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.150 15:43:30 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.150 15:43:30 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.150 15:43:30 -- common/autotest_common.sh@10 -- # set +x 00:25:00.150 [2024-04-26 15:43:30.247206] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
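Note: the case starting here (target/tls.sh@146, wrapped in NOT) deliberately hands bdevperf the second key, /tmp/tmp.6RSIeK5tPi, which the target subsystem was never configured with, so the attach is expected to fail. A rough Python equivalent of that expectation, reusing the exact rpc.py invocation from this log and assuming rpc.py exits non-zero when the JSON-RPC call returns an error:

#!/usr/bin/env python3
# Sketch only: attach with a PSK the target does not know and treat a
# failing bdev_nvme_attach_controller call as the expected (passing) outcome.
import subprocess

RPC = "/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
cmd = [
    RPC, "-s", "/var/tmp/bdevperf.sock",
    "bdev_nvme_attach_controller", "-b", "TLSTEST",
    "-t", "tcp", "-a", "10.0.0.2", "-s", "4420", "-f", "ipv4",
    "-n", "nqn.2016-06.io.spdk:cnode1",
    "-q", "nqn.2016-06.io.spdk:host1",
    "--psk", "/tmp/tmp.6RSIeK5tPi",  # key the target was never told about
]
result = subprocess.run(cmd, capture_output=True, text=True)
if result.returncode != 0:
    print("attach failed as expected (wrong PSK)")
else:
    raise SystemExit("attach unexpectedly succeeded with the wrong PSK")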
00:25:00.150 [2024-04-26 15:43:30.247282] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77476 ] 00:25:00.150 [2024-04-26 15:43:30.381003] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:00.428 [2024-04-26 15:43:30.496562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:00.428 15:43:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:00.428 15:43:30 -- common/autotest_common.sh@850 -- # return 0 00:25:00.428 15:43:30 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.6RSIeK5tPi 00:25:00.687 [2024-04-26 15:43:30.874310] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:00.687 [2024-04-26 15:43:30.874422] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:00.687 [2024-04-26 15:43:30.885888] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:00.687 [2024-04-26 15:43:30.885958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233f9f0 (107): Transport endpoint is not connected 00:25:00.687 [2024-04-26 15:43:30.886948] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x233f9f0 (9): Bad file descriptor 00:25:00.687 [2024-04-26 15:43:30.887944] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:00.687 [2024-04-26 15:43:30.887969] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:00.687 [2024-04-26 15:43:30.887984] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:00.687 2024/04/26 15:43:30 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.6RSIeK5tPi subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:00.687 request: 00:25:00.687 { 00:25:00.687 "method": "bdev_nvme_attach_controller", 00:25:00.687 "params": { 00:25:00.687 "name": "TLSTEST", 00:25:00.687 "trtype": "tcp", 00:25:00.687 "traddr": "10.0.0.2", 00:25:00.687 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:00.687 "adrfam": "ipv4", 00:25:00.687 "trsvcid": "4420", 00:25:00.687 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:00.687 "psk": "/tmp/tmp.6RSIeK5tPi" 00:25:00.687 } 00:25:00.687 } 00:25:00.687 Got JSON-RPC error response 00:25:00.687 GoRPCClient: error on JSON-RPC call 00:25:00.687 15:43:30 -- target/tls.sh@36 -- # killprocess 77476 00:25:00.687 15:43:30 -- common/autotest_common.sh@936 -- # '[' -z 77476 ']' 00:25:00.687 15:43:30 -- common/autotest_common.sh@940 -- # kill -0 77476 00:25:00.687 15:43:30 -- common/autotest_common.sh@941 -- # uname 00:25:00.687 15:43:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:00.687 15:43:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77476 00:25:00.687 15:43:30 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:00.687 15:43:30 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:00.687 killing process with pid 77476 00:25:00.687 15:43:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77476' 00:25:00.687 Received shutdown signal, test time was about 10.000000 seconds 00:25:00.687 00:25:00.687 Latency(us) 00:25:00.687 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:00.687 =================================================================================================================== 00:25:00.687 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:00.687 15:43:30 -- common/autotest_common.sh@955 -- # kill 77476 00:25:00.687 [2024-04-26 15:43:30.936555] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:00.687 15:43:30 -- common/autotest_common.sh@960 -- # wait 77476 00:25:00.945 15:43:31 -- target/tls.sh@37 -- # return 1 00:25:00.945 15:43:31 -- common/autotest_common.sh@641 -- # es=1 00:25:00.945 15:43:31 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:00.945 15:43:31 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:00.945 15:43:31 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:00.945 15:43:31 -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FZ0eNWGbVp 00:25:00.945 15:43:31 -- common/autotest_common.sh@638 -- # local es=0 00:25:00.945 15:43:31 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FZ0eNWGbVp 00:25:00.945 15:43:31 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:25:00.945 15:43:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:00.945 15:43:31 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:25:00.945 15:43:31 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:00.945 15:43:31 -- common/autotest_common.sh@641 -- # run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FZ0eNWGbVp 00:25:00.945 15:43:31 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:00.945 15:43:31 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:00.945 15:43:31 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:25:00.945 15:43:31 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FZ0eNWGbVp' 00:25:00.945 15:43:31 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:00.945 15:43:31 -- target/tls.sh@28 -- # bdevperf_pid=77508 00:25:00.945 15:43:31 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:00.945 15:43:31 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:00.945 15:43:31 -- target/tls.sh@31 -- # waitforlisten 77508 /var/tmp/bdevperf.sock 00:25:00.945 15:43:31 -- common/autotest_common.sh@817 -- # '[' -z 77508 ']' 00:25:00.945 15:43:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:00.945 15:43:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:00.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:00.945 15:43:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:00.945 15:43:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:00.945 15:43:31 -- common/autotest_common.sh@10 -- # set +x 00:25:01.203 [2024-04-26 15:43:31.240710] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:01.203 [2024-04-26 15:43:31.240810] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77508 ] 00:25:01.203 [2024-04-26 15:43:31.378555] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:01.203 [2024-04-26 15:43:31.491630] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:02.138 15:43:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:02.138 15:43:32 -- common/autotest_common.sh@850 -- # return 0 00:25:02.138 15:43:32 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FZ0eNWGbVp 00:25:02.396 [2024-04-26 15:43:32.498370] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:02.396 [2024-04-26 15:43:32.498484] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:02.396 [2024-04-26 15:43:32.505955] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:02.396 [2024-04-26 15:43:32.506014] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:25:02.396 [2024-04-26 15:43:32.506067] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:02.396 [2024-04-26 15:43:32.506104] 
nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b729f0 (107): Transport endpoint is not connected 00:25:02.396 [2024-04-26 15:43:32.507095] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b729f0 (9): Bad file descriptor 00:25:02.396 [2024-04-26 15:43:32.508092] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:02.396 [2024-04-26 15:43:32.508132] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:02.396 [2024-04-26 15:43:32.508154] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:25:02.396 2024/04/26 15:43:32 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host2 name:TLSTEST psk:/tmp/tmp.FZ0eNWGbVp subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:02.396 request: 00:25:02.396 { 00:25:02.396 "method": "bdev_nvme_attach_controller", 00:25:02.396 "params": { 00:25:02.396 "name": "TLSTEST", 00:25:02.396 "trtype": "tcp", 00:25:02.396 "traddr": "10.0.0.2", 00:25:02.396 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:25:02.396 "adrfam": "ipv4", 00:25:02.396 "trsvcid": "4420", 00:25:02.396 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:02.396 "psk": "/tmp/tmp.FZ0eNWGbVp" 00:25:02.396 } 00:25:02.396 } 00:25:02.396 Got JSON-RPC error response 00:25:02.396 GoRPCClient: error on JSON-RPC call 00:25:02.396 15:43:32 -- target/tls.sh@36 -- # killprocess 77508 00:25:02.396 15:43:32 -- common/autotest_common.sh@936 -- # '[' -z 77508 ']' 00:25:02.397 15:43:32 -- common/autotest_common.sh@940 -- # kill -0 77508 00:25:02.397 15:43:32 -- common/autotest_common.sh@941 -- # uname 00:25:02.397 15:43:32 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:02.397 15:43:32 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77508 00:25:02.397 15:43:32 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:02.397 killing process with pid 77508 00:25:02.397 15:43:32 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:02.397 15:43:32 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77508' 00:25:02.397 Received shutdown signal, test time was about 10.000000 seconds 00:25:02.397 00:25:02.397 Latency(us) 00:25:02.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:02.397 =================================================================================================================== 00:25:02.397 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:02.397 15:43:32 -- common/autotest_common.sh@955 -- # kill 77508 00:25:02.397 [2024-04-26 15:43:32.555030] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:02.397 15:43:32 -- common/autotest_common.sh@960 -- # wait 77508 00:25:02.655 15:43:32 -- target/tls.sh@37 -- # return 1 00:25:02.655 15:43:32 -- common/autotest_common.sh@641 -- # es=1 00:25:02.655 15:43:32 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:02.655 15:43:32 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:02.655 15:43:32 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:02.655 15:43:32 -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 
nqn.2016-06.io.spdk:host1 /tmp/tmp.FZ0eNWGbVp 00:25:02.655 15:43:32 -- common/autotest_common.sh@638 -- # local es=0 00:25:02.655 15:43:32 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZ0eNWGbVp 00:25:02.655 15:43:32 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:25:02.655 15:43:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:02.655 15:43:32 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:25:02.655 15:43:32 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:02.655 15:43:32 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZ0eNWGbVp 00:25:02.655 15:43:32 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:02.655 15:43:32 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:25:02.655 15:43:32 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:02.655 15:43:32 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FZ0eNWGbVp' 00:25:02.655 15:43:32 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:02.655 15:43:32 -- target/tls.sh@28 -- # bdevperf_pid=77548 00:25:02.655 15:43:32 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:02.655 15:43:32 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:02.655 15:43:32 -- target/tls.sh@31 -- # waitforlisten 77548 /var/tmp/bdevperf.sock 00:25:02.655 15:43:32 -- common/autotest_common.sh@817 -- # '[' -z 77548 ']' 00:25:02.655 15:43:32 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:02.655 15:43:32 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:02.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:02.655 15:43:32 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:02.655 15:43:32 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:02.655 15:43:32 -- common/autotest_common.sh@10 -- # set +x 00:25:02.655 [2024-04-26 15:43:32.855504] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
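Note: the "Could not find PSK for identity" errors in the previous case (host2 against cnode1) and in the cnode2 case that follows show the identity string the target computes during the TLS handshake; in this log it has the shape "NVMe0R01 <hostnqn> <subnqn>". A purely illustrative Python lookup in that shape, with the identity format taken from the log lines rather than the SPDK sources:

#!/usr/bin/env python3
# Illustration only: mimic the PSK-identity lookup implied by the
# "Could not find PSK for identity: NVMe0R01 <hostnqn> <subnqn>" errors.
def psk_identity(hostnqn: str, subnqn: str) -> str:
    return f"NVMe0R01 {hostnqn} {subnqn}"

# Only host1 was registered against cnode1 (nvmf_subsystem_add_host --psk).
registered = {
    psk_identity("nqn.2016-06.io.spdk:host1",
                 "nqn.2016-06.io.spdk:cnode1"): "/tmp/tmp.FZ0eNWGbVp",
}

cases = [
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode1"),  # valid pairing
    ("nqn.2016-06.io.spdk:host2", "nqn.2016-06.io.spdk:cnode1"),  # target/tls.sh@149
    ("nqn.2016-06.io.spdk:host1", "nqn.2016-06.io.spdk:cnode2"),  # target/tls.sh@152
]
for hostnqn, subnqn in cases:
    ident = psk_identity(hostnqn, subnqn)
    print(ident, "->", registered.get(ident, "no PSK found, handshake fails"))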
00:25:02.655 [2024-04-26 15:43:32.855598] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77548 ] 00:25:02.913 [2024-04-26 15:43:32.988517] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:02.913 [2024-04-26 15:43:33.104514] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:03.847 15:43:33 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:03.847 15:43:33 -- common/autotest_common.sh@850 -- # return 0 00:25:03.847 15:43:33 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FZ0eNWGbVp 00:25:03.847 [2024-04-26 15:43:34.110936] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:03.847 [2024-04-26 15:43:34.111053] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:03.847 [2024-04-26 15:43:34.116184] tcp.c: 879:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:03.847 [2024-04-26 15:43:34.116218] posix.c: 588:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:25:03.847 [2024-04-26 15:43:34.116271] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:03.847 [2024-04-26 15:43:34.116661] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9079f0 (107): Transport endpoint is not connected 00:25:03.847 [2024-04-26 15:43:34.117648] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9079f0 (9): Bad file descriptor 00:25:03.847 [2024-04-26 15:43:34.118644] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:25:03.847 [2024-04-26 15:43:34.118667] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:03.847 [2024-04-26 15:43:34.118696] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:25:03.847 2024/04/26 15:43:34 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.FZ0eNWGbVp subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:03.847 request: 00:25:03.847 { 00:25:03.847 "method": "bdev_nvme_attach_controller", 00:25:03.847 "params": { 00:25:03.847 "name": "TLSTEST", 00:25:03.847 "trtype": "tcp", 00:25:03.847 "traddr": "10.0.0.2", 00:25:03.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:03.847 "adrfam": "ipv4", 00:25:03.847 "trsvcid": "4420", 00:25:03.847 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:25:03.847 "psk": "/tmp/tmp.FZ0eNWGbVp" 00:25:03.847 } 00:25:03.847 } 00:25:03.847 Got JSON-RPC error response 00:25:03.847 GoRPCClient: error on JSON-RPC call 00:25:03.847 15:43:34 -- target/tls.sh@36 -- # killprocess 77548 00:25:04.106 15:43:34 -- common/autotest_common.sh@936 -- # '[' -z 77548 ']' 00:25:04.106 15:43:34 -- common/autotest_common.sh@940 -- # kill -0 77548 00:25:04.106 15:43:34 -- common/autotest_common.sh@941 -- # uname 00:25:04.106 15:43:34 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:04.106 15:43:34 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77548 00:25:04.106 15:43:34 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:04.106 killing process with pid 77548 00:25:04.106 15:43:34 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:04.106 15:43:34 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77548' 00:25:04.106 Received shutdown signal, test time was about 10.000000 seconds 00:25:04.106 00:25:04.106 Latency(us) 00:25:04.106 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:04.106 =================================================================================================================== 00:25:04.106 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:04.106 15:43:34 -- common/autotest_common.sh@955 -- # kill 77548 00:25:04.106 [2024-04-26 15:43:34.167423] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:04.106 15:43:34 -- common/autotest_common.sh@960 -- # wait 77548 00:25:04.366 15:43:34 -- target/tls.sh@37 -- # return 1 00:25:04.366 15:43:34 -- common/autotest_common.sh@641 -- # es=1 00:25:04.366 15:43:34 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:04.366 15:43:34 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:04.366 15:43:34 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:04.366 15:43:34 -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:04.366 15:43:34 -- common/autotest_common.sh@638 -- # local es=0 00:25:04.366 15:43:34 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:25:04.366 15:43:34 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:25:04.366 15:43:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:04.366 15:43:34 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:25:04.366 15:43:34 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:04.366 15:43:34 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 
00:25:04.366 15:43:34 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:04.366 15:43:34 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:04.366 15:43:34 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:04.366 15:43:34 -- target/tls.sh@23 -- # psk= 00:25:04.366 15:43:34 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:04.366 15:43:34 -- target/tls.sh@28 -- # bdevperf_pid=77599 00:25:04.366 15:43:34 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:04.366 15:43:34 -- target/tls.sh@31 -- # waitforlisten 77599 /var/tmp/bdevperf.sock 00:25:04.366 15:43:34 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:04.366 15:43:34 -- common/autotest_common.sh@817 -- # '[' -z 77599 ']' 00:25:04.366 15:43:34 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:04.366 15:43:34 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:04.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:04.367 15:43:34 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:04.367 15:43:34 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:04.367 15:43:34 -- common/autotest_common.sh@10 -- # set +x 00:25:04.367 [2024-04-26 15:43:34.479286] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:04.367 [2024-04-26 15:43:34.479387] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77599 ] 00:25:04.367 [2024-04-26 15:43:34.617264] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.634 [2024-04-26 15:43:34.736293] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:05.202 15:43:35 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:05.202 15:43:35 -- common/autotest_common.sh@850 -- # return 0 00:25:05.202 15:43:35 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:05.461 [2024-04-26 15:43:35.706537] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:25:05.461 [2024-04-26 15:43:35.708525] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x99edc0 (9): Bad file descriptor 00:25:05.461 [2024-04-26 15:43:35.709520] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:25:05.461 [2024-04-26 15:43:35.709544] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:25:05.461 [2024-04-26 15:43:35.709558] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:25:05.461 2024/04/26 15:43:35 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:25:05.461 request: 00:25:05.461 { 00:25:05.461 "method": "bdev_nvme_attach_controller", 00:25:05.461 "params": { 00:25:05.461 "name": "TLSTEST", 00:25:05.461 "trtype": "tcp", 00:25:05.461 "traddr": "10.0.0.2", 00:25:05.461 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:05.461 "adrfam": "ipv4", 00:25:05.461 "trsvcid": "4420", 00:25:05.461 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:25:05.461 } 00:25:05.461 } 00:25:05.461 Got JSON-RPC error response 00:25:05.461 GoRPCClient: error on JSON-RPC call 00:25:05.461 15:43:35 -- target/tls.sh@36 -- # killprocess 77599 00:25:05.461 15:43:35 -- common/autotest_common.sh@936 -- # '[' -z 77599 ']' 00:25:05.461 15:43:35 -- common/autotest_common.sh@940 -- # kill -0 77599 00:25:05.461 15:43:35 -- common/autotest_common.sh@941 -- # uname 00:25:05.461 15:43:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:05.461 15:43:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77599 00:25:05.461 killing process with pid 77599 00:25:05.461 Received shutdown signal, test time was about 10.000000 seconds 00:25:05.461 00:25:05.461 Latency(us) 00:25:05.461 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:05.461 =================================================================================================================== 00:25:05.461 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:05.461 15:43:35 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:05.461 15:43:35 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:05.461 15:43:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77599' 00:25:05.461 15:43:35 -- common/autotest_common.sh@955 -- # kill 77599 00:25:05.461 15:43:35 -- common/autotest_common.sh@960 -- # wait 77599 00:25:05.720 15:43:35 -- target/tls.sh@37 -- # return 1 00:25:05.720 15:43:35 -- common/autotest_common.sh@641 -- # es=1 00:25:05.720 15:43:35 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:05.720 15:43:35 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:05.720 15:43:35 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:05.720 15:43:36 -- target/tls.sh@158 -- # killprocess 76959 00:25:05.720 15:43:36 -- common/autotest_common.sh@936 -- # '[' -z 76959 ']' 00:25:05.720 15:43:36 -- common/autotest_common.sh@940 -- # kill -0 76959 00:25:05.720 15:43:36 -- common/autotest_common.sh@941 -- # uname 00:25:05.720 15:43:36 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:05.720 15:43:36 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 76959 00:25:05.979 killing process with pid 76959 00:25:05.979 15:43:36 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:05.979 15:43:36 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:05.979 15:43:36 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 76959' 00:25:05.979 15:43:36 -- common/autotest_common.sh@955 -- # kill 76959 00:25:05.979 [2024-04-26 15:43:36.023705] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:05.979 15:43:36 -- 
common/autotest_common.sh@960 -- # wait 76959 00:25:06.238 15:43:36 -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:25:06.238 15:43:36 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:25:06.238 15:43:36 -- nvmf/common.sh@691 -- # local prefix key digest 00:25:06.238 15:43:36 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:25:06.238 15:43:36 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:25:06.238 15:43:36 -- nvmf/common.sh@693 -- # digest=2 00:25:06.238 15:43:36 -- nvmf/common.sh@694 -- # python - 00:25:06.238 15:43:36 -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:06.238 15:43:36 -- target/tls.sh@160 -- # mktemp 00:25:06.238 15:43:36 -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.fBJPVbgFTx 00:25:06.238 15:43:36 -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:25:06.238 15:43:36 -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.fBJPVbgFTx 00:25:06.238 15:43:36 -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:25:06.238 15:43:36 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:06.238 15:43:36 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:06.238 15:43:36 -- common/autotest_common.sh@10 -- # set +x 00:25:06.238 15:43:36 -- nvmf/common.sh@470 -- # nvmfpid=77655 00:25:06.238 15:43:36 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:06.238 15:43:36 -- nvmf/common.sh@471 -- # waitforlisten 77655 00:25:06.238 15:43:36 -- common/autotest_common.sh@817 -- # '[' -z 77655 ']' 00:25:06.238 15:43:36 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:06.238 15:43:36 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:06.238 15:43:36 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.238 15:43:36 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:06.238 15:43:36 -- common/autotest_common.sh@10 -- # set +x 00:25:06.238 [2024-04-26 15:43:36.416505] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:06.238 [2024-04-26 15:43:36.416595] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:06.496 [2024-04-26 15:43:36.553733] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:06.496 [2024-04-26 15:43:36.673136] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:06.496 [2024-04-26 15:43:36.673215] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:06.496 [2024-04-26 15:43:36.673226] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:06.496 [2024-04-26 15:43:36.673234] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:06.496 [2024-04-26 15:43:36.673241] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
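The key_long value produced above by format_interchange_psk / format_key is the NVMe TLS PSK "interchange" form of the raw 48-character key. A standalone sketch of that transformation, for illustration only (the authoritative helper is format_key in nvmf/common.sh, which the trace shows running "python -"; the CRC handling and the reading of digest id 2 as SHA-384 are assumptions inferred from the output string):

key=00112233445566778899aabbccddeeff0011223344556677
digest=2    # the "02" field of the interchange string; believed to select SHA-384
key_long=$(python3 -c '
import base64, sys, zlib
key = sys.argv[1].encode()
crc = zlib.crc32(key).to_bytes(4, "little")    # the 4 trailing bytes inside the base64 payload
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()), end="")
' "$key" "$digest")
key_long_path=$(mktemp)
echo -n "$key_long" > "$key_long_path"
chmod 0600 "$key_long_path"    # owner-only perms; the 0666 experiment later in this log fails

The test only ever hands the key around as a file path (/tmp/tmp.fBJPVbgFTx in this run), which is why the mktemp and chmod 0600 steps matter for everything that follows.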
00:25:06.496 [2024-04-26 15:43:36.673270] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:07.063 15:43:37 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:07.063 15:43:37 -- common/autotest_common.sh@850 -- # return 0 00:25:07.063 15:43:37 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:07.063 15:43:37 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:07.063 15:43:37 -- common/autotest_common.sh@10 -- # set +x 00:25:07.321 15:43:37 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:07.321 15:43:37 -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.fBJPVbgFTx 00:25:07.321 15:43:37 -- target/tls.sh@49 -- # local key=/tmp/tmp.fBJPVbgFTx 00:25:07.321 15:43:37 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:07.580 [2024-04-26 15:43:37.657104] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:07.580 15:43:37 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:07.839 15:43:37 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:08.098 [2024-04-26 15:43:38.201256] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:08.098 [2024-04-26 15:43:38.201507] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:08.098 15:43:38 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:08.364 malloc0 00:25:08.364 15:43:38 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:08.641 15:43:38 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:08.900 [2024-04-26 15:43:39.017842] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:08.900 15:43:39 -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fBJPVbgFTx 00:25:08.900 15:43:39 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:08.900 15:43:39 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:08.900 15:43:39 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:08.900 15:43:39 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fBJPVbgFTx' 00:25:08.900 15:43:39 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:08.900 15:43:39 -- target/tls.sh@28 -- # bdevperf_pid=77757 00:25:08.900 15:43:39 -- target/tls.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:08.900 15:43:39 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:08.900 15:43:39 -- target/tls.sh@31 -- # waitforlisten 77757 /var/tmp/bdevperf.sock 00:25:08.900 15:43:39 -- common/autotest_common.sh@817 -- # '[' -z 77757 ']' 00:25:08.900 15:43:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:08.900 15:43:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:08.900 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
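For reference, the setup_nvmf_tgt sequence traced above collapses to the following target-side RPCs. All values (NQNs, serial number, 10.0.0.2:4420, the malloc bdev and the key file) are the ones from this run; rpc.py talks to the target's default /var/tmp/spdk.sock:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
psk=/tmp/tmp.fBJPVbgFTx

$rpc nvmf_create_transport -t tcp -o          # TCP transport; the config saved later in the log shows c2h_success=false for this run
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10   # serial number, max 10 namespaces
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k   # -k: secure-channel (TLS) listener
$rpc bdev_malloc_create 32 4096 -b malloc0    # 32 MB ram bdev, 4096-byte blocks
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$psk"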
00:25:08.900 15:43:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:08.900 15:43:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:08.900 15:43:39 -- common/autotest_common.sh@10 -- # set +x 00:25:08.900 [2024-04-26 15:43:39.121258] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:08.900 [2024-04-26 15:43:39.121410] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77757 ] 00:25:09.158 [2024-04-26 15:43:39.267686] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:09.158 [2024-04-26 15:43:39.399523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:10.093 15:43:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:10.093 15:43:40 -- common/autotest_common.sh@850 -- # return 0 00:25:10.093 15:43:40 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:10.093 [2024-04-26 15:43:40.225030] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:10.093 [2024-04-26 15:43:40.225586] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:10.093 TLSTESTn1 00:25:10.093 15:43:40 -- target/tls.sh@41 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:10.351 Running I/O for 10 seconds... 
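The initiator side, gathered from the same trace: bdevperf is started idle on its own RPC socket, the TLS controller is attached with the same PSK file, and bdevperf.py then triggers the queued verify workload (binaries, socket path and flags exactly as in this run):

bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

$bdevperf -m 0x4 -z -r "$sock" -q 128 -o 4096 -w verify -t 10 &    # -z: start idle and wait for RPCs; verify workload, QD 128, 4 KiB IOs, 10 s
# (the harness waits for $sock to appear before issuing RPCs)
$rpc -s "$sock" bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s "$sock" perform_tests   # kicks off the run and waits for the result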
00:25:20.324 00:25:20.324 Latency(us) 00:25:20.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.324 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:20.324 Verification LBA range: start 0x0 length 0x2000 00:25:20.324 TLSTESTn1 : 10.02 3758.13 14.68 0.00 0.00 33987.90 7685.59 26452.71 00:25:20.324 =================================================================================================================== 00:25:20.324 Total : 3758.13 14.68 0.00 0.00 33987.90 7685.59 26452.71 00:25:20.324 0 00:25:20.324 15:43:50 -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:20.324 15:43:50 -- target/tls.sh@45 -- # killprocess 77757 00:25:20.324 15:43:50 -- common/autotest_common.sh@936 -- # '[' -z 77757 ']' 00:25:20.324 15:43:50 -- common/autotest_common.sh@940 -- # kill -0 77757 00:25:20.324 15:43:50 -- common/autotest_common.sh@941 -- # uname 00:25:20.324 15:43:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:20.324 15:43:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77757 00:25:20.324 killing process with pid 77757 00:25:20.324 Received shutdown signal, test time was about 10.000000 seconds 00:25:20.324 00:25:20.324 Latency(us) 00:25:20.324 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:20.324 =================================================================================================================== 00:25:20.324 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:20.324 15:43:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:20.324 15:43:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:20.324 15:43:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77757' 00:25:20.324 15:43:50 -- common/autotest_common.sh@955 -- # kill 77757 00:25:20.324 [2024-04-26 15:43:50.482833] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:20.324 15:43:50 -- common/autotest_common.sh@960 -- # wait 77757 00:25:20.582 15:43:50 -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.fBJPVbgFTx 00:25:20.582 15:43:50 -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fBJPVbgFTx 00:25:20.582 15:43:50 -- common/autotest_common.sh@638 -- # local es=0 00:25:20.582 15:43:50 -- common/autotest_common.sh@640 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fBJPVbgFTx 00:25:20.582 15:43:50 -- common/autotest_common.sh@626 -- # local arg=run_bdevperf 00:25:20.582 15:43:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:20.582 15:43:50 -- common/autotest_common.sh@630 -- # type -t run_bdevperf 00:25:20.582 15:43:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:20.582 15:43:50 -- common/autotest_common.sh@641 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.fBJPVbgFTx 00:25:20.582 15:43:50 -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:25:20.582 15:43:50 -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:25:20.582 15:43:50 -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:25:20.582 15:43:50 -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.fBJPVbgFTx' 00:25:20.582 15:43:50 -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:20.582 15:43:50 -- target/tls.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:20.582 15:43:50 -- target/tls.sh@28 -- # bdevperf_pid=77910 00:25:20.582 15:43:50 -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:20.582 15:43:50 -- target/tls.sh@31 -- # waitforlisten 77910 /var/tmp/bdevperf.sock 00:25:20.582 15:43:50 -- common/autotest_common.sh@817 -- # '[' -z 77910 ']' 00:25:20.582 15:43:50 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:20.582 15:43:50 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:20.582 15:43:50 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:20.582 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:20.582 15:43:50 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:20.582 15:43:50 -- common/autotest_common.sh@10 -- # set +x 00:25:20.582 [2024-04-26 15:43:50.800106] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:20.582 [2024-04-26 15:43:50.800244] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77910 ] 00:25:20.840 [2024-04-26 15:43:50.939541] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:20.840 [2024-04-26 15:43:51.058147] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:21.783 15:43:51 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:21.783 15:43:51 -- common/autotest_common.sh@850 -- # return 0 00:25:21.783 15:43:51 -- target/tls.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:21.783 [2024-04-26 15:43:52.066944] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:21.783 [2024-04-26 15:43:52.067023] bdev_nvme.c:6067:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:21.783 [2024-04-26 15:43:52.067035] bdev_nvme.c:6176:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.fBJPVbgFTx 00:25:21.783 2024/04/26 15:43:52 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host1 name:TLSTEST psk:/tmp/tmp.fBJPVbgFTx subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-1 Msg=Operation not permitted 00:25:21.783 request: 00:25:21.783 { 00:25:21.783 "method": "bdev_nvme_attach_controller", 00:25:21.783 "params": { 00:25:21.783 "name": "TLSTEST", 00:25:21.783 "trtype": "tcp", 00:25:21.783 "traddr": "10.0.0.2", 00:25:21.783 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:21.783 "adrfam": "ipv4", 00:25:21.783 "trsvcid": "4420", 00:25:21.783 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:21.783 "psk": "/tmp/tmp.fBJPVbgFTx" 00:25:21.783 } 00:25:21.783 } 00:25:21.783 Got JSON-RPC error response 00:25:21.783 GoRPCClient: error on JSON-RPC call 00:25:22.042 15:43:52 -- target/tls.sh@36 -- # killprocess 77910 00:25:22.042 15:43:52 -- common/autotest_common.sh@936 -- # '[' -z 77910 ']' 00:25:22.042 15:43:52 -- 
common/autotest_common.sh@940 -- # kill -0 77910 00:25:22.042 15:43:52 -- common/autotest_common.sh@941 -- # uname 00:25:22.042 15:43:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.042 15:43:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77910 00:25:22.042 15:43:52 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:22.042 15:43:52 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:22.042 15:43:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77910' 00:25:22.042 killing process with pid 77910 00:25:22.042 Received shutdown signal, test time was about 10.000000 seconds 00:25:22.042 00:25:22.042 Latency(us) 00:25:22.042 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:22.042 =================================================================================================================== 00:25:22.042 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:22.042 15:43:52 -- common/autotest_common.sh@955 -- # kill 77910 00:25:22.042 15:43:52 -- common/autotest_common.sh@960 -- # wait 77910 00:25:22.301 15:43:52 -- target/tls.sh@37 -- # return 1 00:25:22.301 15:43:52 -- common/autotest_common.sh@641 -- # es=1 00:25:22.301 15:43:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:22.301 15:43:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:22.301 15:43:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:22.301 15:43:52 -- target/tls.sh@174 -- # killprocess 77655 00:25:22.301 15:43:52 -- common/autotest_common.sh@936 -- # '[' -z 77655 ']' 00:25:22.301 15:43:52 -- common/autotest_common.sh@940 -- # kill -0 77655 00:25:22.301 15:43:52 -- common/autotest_common.sh@941 -- # uname 00:25:22.301 15:43:52 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:22.301 15:43:52 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77655 00:25:22.301 killing process with pid 77655 00:25:22.301 15:43:52 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:22.301 15:43:52 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:22.301 15:43:52 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77655' 00:25:22.301 15:43:52 -- common/autotest_common.sh@955 -- # kill 77655 00:25:22.301 [2024-04-26 15:43:52.394052] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:22.301 15:43:52 -- common/autotest_common.sh@960 -- # wait 77655 00:25:22.559 15:43:52 -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:25:22.559 15:43:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:22.559 15:43:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:22.559 15:43:52 -- common/autotest_common.sh@10 -- # set +x 00:25:22.559 15:43:52 -- nvmf/common.sh@470 -- # nvmfpid=77966 00:25:22.559 15:43:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:22.559 15:43:52 -- nvmf/common.sh@471 -- # waitforlisten 77966 00:25:22.559 15:43:52 -- common/autotest_common.sh@817 -- # '[' -z 77966 ']' 00:25:22.559 15:43:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:22.559 15:43:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:22.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
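Both failing cases around here trace back to the chmod 0666 on the key file: bdev_nvme_attach_controller above reports "Incorrect permissions for PSK file", and nvmf_subsystem_add_host below fails the same way on the target side until the later chmod 0600 restores things. A pre-flight guard along these lines would catch it before the RPC; this is purely illustrative and not part of tls.sh, and the in-tree check appears to object to any group/other access bits rather than demanding exactly 0600:

psk=/tmp/tmp.fBJPVbgFTx
if [ "$(stat -c '%a' "$psk")" != "600" ]; then
    echo "PSK file $psk must be chmod 0600 before use" >&2
    exit 1
fi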
00:25:22.559 15:43:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:22.559 15:43:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:22.559 15:43:52 -- common/autotest_common.sh@10 -- # set +x 00:25:22.559 [2024-04-26 15:43:52.740050] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:22.559 [2024-04-26 15:43:52.740238] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:22.816 [2024-04-26 15:43:52.882962] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:22.816 [2024-04-26 15:43:53.004147] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:22.816 [2024-04-26 15:43:53.004227] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:22.816 [2024-04-26 15:43:53.004240] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:22.816 [2024-04-26 15:43:53.004249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:22.816 [2024-04-26 15:43:53.004256] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:22.816 [2024-04-26 15:43:53.004296] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.750 15:43:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:23.750 15:43:53 -- common/autotest_common.sh@850 -- # return 0 00:25:23.750 15:43:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:23.750 15:43:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:23.750 15:43:53 -- common/autotest_common.sh@10 -- # set +x 00:25:23.750 15:43:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:23.750 15:43:53 -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.fBJPVbgFTx 00:25:23.750 15:43:53 -- common/autotest_common.sh@638 -- # local es=0 00:25:23.750 15:43:53 -- common/autotest_common.sh@640 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.fBJPVbgFTx 00:25:23.750 15:43:53 -- common/autotest_common.sh@626 -- # local arg=setup_nvmf_tgt 00:25:23.750 15:43:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:23.750 15:43:53 -- common/autotest_common.sh@630 -- # type -t setup_nvmf_tgt 00:25:23.750 15:43:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:23.750 15:43:53 -- common/autotest_common.sh@641 -- # setup_nvmf_tgt /tmp/tmp.fBJPVbgFTx 00:25:23.750 15:43:53 -- target/tls.sh@49 -- # local key=/tmp/tmp.fBJPVbgFTx 00:25:23.750 15:43:53 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:24.008 [2024-04-26 15:43:54.091516] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:24.008 15:43:54 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:24.266 15:43:54 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:24.524 [2024-04-26 15:43:54.679654] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:24.524 
[2024-04-26 15:43:54.679883] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:24.524 15:43:54 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:24.782 malloc0 00:25:24.782 15:43:55 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:25.040 15:43:55 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:25.299 [2024-04-26 15:43:55.563243] tcp.c:3565:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:25:25.299 [2024-04-26 15:43:55.563291] tcp.c:3651:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:25:25.299 [2024-04-26 15:43:55.563317] subsystem.c: 971:spdk_nvmf_subsystem_add_host: *ERROR*: Unable to add host to TCP transport 00:25:25.299 2024/04/26 15:43:55 error on JSON-RPC call, method: nvmf_subsystem_add_host, params: map[host:nqn.2016-06.io.spdk:host1 nqn:nqn.2016-06.io.spdk:cnode1 psk:/tmp/tmp.fBJPVbgFTx], err: error received for nvmf_subsystem_add_host method, err: Code=-32603 Msg=Internal error 00:25:25.299 request: 00:25:25.299 { 00:25:25.299 "method": "nvmf_subsystem_add_host", 00:25:25.299 "params": { 00:25:25.299 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:25.299 "host": "nqn.2016-06.io.spdk:host1", 00:25:25.299 "psk": "/tmp/tmp.fBJPVbgFTx" 00:25:25.299 } 00:25:25.299 } 00:25:25.299 Got JSON-RPC error response 00:25:25.299 GoRPCClient: error on JSON-RPC call 00:25:25.299 15:43:55 -- common/autotest_common.sh@641 -- # es=1 00:25:25.299 15:43:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:25.299 15:43:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:25.299 15:43:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:25.299 15:43:55 -- target/tls.sh@180 -- # killprocess 77966 00:25:25.299 15:43:55 -- common/autotest_common.sh@936 -- # '[' -z 77966 ']' 00:25:25.299 15:43:55 -- common/autotest_common.sh@940 -- # kill -0 77966 00:25:25.299 15:43:55 -- common/autotest_common.sh@941 -- # uname 00:25:25.558 15:43:55 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:25.558 15:43:55 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 77966 00:25:25.558 15:43:55 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:25.558 killing process with pid 77966 00:25:25.558 15:43:55 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:25.558 15:43:55 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 77966' 00:25:25.558 15:43:55 -- common/autotest_common.sh@955 -- # kill 77966 00:25:25.558 15:43:55 -- common/autotest_common.sh@960 -- # wait 77966 00:25:25.816 15:43:55 -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.fBJPVbgFTx 00:25:25.816 15:43:55 -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:25:25.816 15:43:55 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:25.816 15:43:55 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:25.816 15:43:55 -- common/autotest_common.sh@10 -- # set +x 00:25:25.816 15:43:55 -- nvmf/common.sh@470 -- # nvmfpid=78080 00:25:25.816 15:43:55 -- nvmf/common.sh@471 -- # waitforlisten 78080 00:25:25.816 15:43:55 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:25.816 15:43:55 -- common/autotest_common.sh@817 -- # '[' -z 
78080 ']' 00:25:25.816 15:43:55 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:25.816 15:43:55 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:25.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:25.816 15:43:55 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:25.816 15:43:55 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:25.816 15:43:55 -- common/autotest_common.sh@10 -- # set +x 00:25:25.816 [2024-04-26 15:43:55.947349] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:25.816 [2024-04-26 15:43:55.947451] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:25.816 [2024-04-26 15:43:56.084525] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:26.073 [2024-04-26 15:43:56.207840] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:26.073 [2024-04-26 15:43:56.207902] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:26.073 [2024-04-26 15:43:56.207915] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:26.073 [2024-04-26 15:43:56.207924] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:26.073 [2024-04-26 15:43:56.207931] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:26.073 [2024-04-26 15:43:56.207960] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.005 15:43:56 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:27.005 15:43:56 -- common/autotest_common.sh@850 -- # return 0 00:25:27.006 15:43:56 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:27.006 15:43:56 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:27.006 15:43:56 -- common/autotest_common.sh@10 -- # set +x 00:25:27.006 15:43:57 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.006 15:43:57 -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.fBJPVbgFTx 00:25:27.006 15:43:57 -- target/tls.sh@49 -- # local key=/tmp/tmp.fBJPVbgFTx 00:25:27.006 15:43:57 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:27.006 [2024-04-26 15:43:57.275122] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.006 15:43:57 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:27.570 15:43:57 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:27.570 [2024-04-26 15:43:57.787288] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:27.570 [2024-04-26 15:43:57.787730] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.570 15:43:57 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:27.827 malloc0 00:25:27.827 15:43:58 -- target/tls.sh@56 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:28.389 15:43:58 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:28.389 [2024-04-26 15:43:58.658790] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:28.389 15:43:58 -- target/tls.sh@188 -- # bdevperf_pid=78184 00:25:28.389 15:43:58 -- target/tls.sh@187 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:25:28.389 15:43:58 -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:28.389 15:43:58 -- target/tls.sh@191 -- # waitforlisten 78184 /var/tmp/bdevperf.sock 00:25:28.389 15:43:58 -- common/autotest_common.sh@817 -- # '[' -z 78184 ']' 00:25:28.389 15:43:58 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:28.389 15:43:58 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:28.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:28.389 15:43:58 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:28.389 15:43:58 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:28.389 15:43:58 -- common/autotest_common.sh@10 -- # set +x 00:25:28.647 [2024-04-26 15:43:58.734625] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:28.647 [2024-04-26 15:43:58.735248] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78184 ] 00:25:28.647 [2024-04-26 15:43:58.873885] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.903 [2024-04-26 15:43:59.018805] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:29.465 15:43:59 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:29.465 15:43:59 -- common/autotest_common.sh@850 -- # return 0 00:25:29.465 15:43:59 -- target/tls.sh@192 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:29.722 [2024-04-26 15:43:59.979192] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:29.722 [2024-04-26 15:43:59.979750] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:29.979 TLSTESTn1 00:25:29.979 15:44:00 -- target/tls.sh@196 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:25:30.237 15:44:00 -- target/tls.sh@196 -- # tgtconf='{ 00:25:30.237 "subsystems": [ 00:25:30.237 { 00:25:30.237 "subsystem": "keyring", 00:25:30.237 "config": [] 00:25:30.237 }, 00:25:30.237 { 00:25:30.237 "subsystem": "iobuf", 00:25:30.237 "config": [ 00:25:30.237 { 00:25:30.237 "method": "iobuf_set_options", 00:25:30.237 "params": { 00:25:30.237 "large_bufsize": 135168, 00:25:30.237 "large_pool_count": 1024, 00:25:30.237 "small_bufsize": 8192, 00:25:30.237 "small_pool_count": 8192 00:25:30.237 } 
00:25:30.237 } 00:25:30.237 ] 00:25:30.237 }, 00:25:30.237 { 00:25:30.237 "subsystem": "sock", 00:25:30.237 "config": [ 00:25:30.237 { 00:25:30.237 "method": "sock_impl_set_options", 00:25:30.237 "params": { 00:25:30.237 "enable_ktls": false, 00:25:30.237 "enable_placement_id": 0, 00:25:30.237 "enable_quickack": false, 00:25:30.237 "enable_recv_pipe": true, 00:25:30.237 "enable_zerocopy_send_client": false, 00:25:30.237 "enable_zerocopy_send_server": true, 00:25:30.237 "impl_name": "posix", 00:25:30.237 "recv_buf_size": 2097152, 00:25:30.237 "send_buf_size": 2097152, 00:25:30.237 "tls_version": 0, 00:25:30.237 "zerocopy_threshold": 0 00:25:30.237 } 00:25:30.237 }, 00:25:30.237 { 00:25:30.237 "method": "sock_impl_set_options", 00:25:30.238 "params": { 00:25:30.238 "enable_ktls": false, 00:25:30.238 "enable_placement_id": 0, 00:25:30.238 "enable_quickack": false, 00:25:30.238 "enable_recv_pipe": true, 00:25:30.238 "enable_zerocopy_send_client": false, 00:25:30.238 "enable_zerocopy_send_server": true, 00:25:30.238 "impl_name": "ssl", 00:25:30.238 "recv_buf_size": 4096, 00:25:30.238 "send_buf_size": 4096, 00:25:30.238 "tls_version": 0, 00:25:30.238 "zerocopy_threshold": 0 00:25:30.238 } 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "subsystem": "vmd", 00:25:30.238 "config": [] 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "subsystem": "accel", 00:25:30.238 "config": [ 00:25:30.238 { 00:25:30.238 "method": "accel_set_options", 00:25:30.238 "params": { 00:25:30.238 "buf_count": 2048, 00:25:30.238 "large_cache_size": 16, 00:25:30.238 "sequence_count": 2048, 00:25:30.238 "small_cache_size": 128, 00:25:30.238 "task_count": 2048 00:25:30.238 } 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "subsystem": "bdev", 00:25:30.238 "config": [ 00:25:30.238 { 00:25:30.238 "method": "bdev_set_options", 00:25:30.238 "params": { 00:25:30.238 "bdev_auto_examine": true, 00:25:30.238 "bdev_io_cache_size": 256, 00:25:30.238 "bdev_io_pool_size": 65535, 00:25:30.238 "iobuf_large_cache_size": 16, 00:25:30.238 "iobuf_small_cache_size": 128 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "bdev_raid_set_options", 00:25:30.238 "params": { 00:25:30.238 "process_window_size_kb": 1024 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "bdev_iscsi_set_options", 00:25:30.238 "params": { 00:25:30.238 "timeout_sec": 30 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "bdev_nvme_set_options", 00:25:30.238 "params": { 00:25:30.238 "action_on_timeout": "none", 00:25:30.238 "allow_accel_sequence": false, 00:25:30.238 "arbitration_burst": 0, 00:25:30.238 "bdev_retry_count": 3, 00:25:30.238 "ctrlr_loss_timeout_sec": 0, 00:25:30.238 "delay_cmd_submit": true, 00:25:30.238 "dhchap_dhgroups": [ 00:25:30.238 "null", 00:25:30.238 "ffdhe2048", 00:25:30.238 "ffdhe3072", 00:25:30.238 "ffdhe4096", 00:25:30.238 "ffdhe6144", 00:25:30.238 "ffdhe8192" 00:25:30.238 ], 00:25:30.238 "dhchap_digests": [ 00:25:30.238 "sha256", 00:25:30.238 "sha384", 00:25:30.238 "sha512" 00:25:30.238 ], 00:25:30.238 "disable_auto_failback": false, 00:25:30.238 "fast_io_fail_timeout_sec": 0, 00:25:30.238 "generate_uuids": false, 00:25:30.238 "high_priority_weight": 0, 00:25:30.238 "io_path_stat": false, 00:25:30.238 "io_queue_requests": 0, 00:25:30.238 "keep_alive_timeout_ms": 10000, 00:25:30.238 "low_priority_weight": 0, 00:25:30.238 "medium_priority_weight": 0, 00:25:30.238 "nvme_adminq_poll_period_us": 10000, 00:25:30.238 "nvme_error_stat": false, 
00:25:30.238 "nvme_ioq_poll_period_us": 0, 00:25:30.238 "rdma_cm_event_timeout_ms": 0, 00:25:30.238 "rdma_max_cq_size": 0, 00:25:30.238 "rdma_srq_size": 0, 00:25:30.238 "reconnect_delay_sec": 0, 00:25:30.238 "timeout_admin_us": 0, 00:25:30.238 "timeout_us": 0, 00:25:30.238 "transport_ack_timeout": 0, 00:25:30.238 "transport_retry_count": 4, 00:25:30.238 "transport_tos": 0 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "bdev_nvme_set_hotplug", 00:25:30.238 "params": { 00:25:30.238 "enable": false, 00:25:30.238 "period_us": 100000 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "bdev_malloc_create", 00:25:30.238 "params": { 00:25:30.238 "block_size": 4096, 00:25:30.238 "name": "malloc0", 00:25:30.238 "num_blocks": 8192, 00:25:30.238 "optimal_io_boundary": 0, 00:25:30.238 "physical_block_size": 4096, 00:25:30.238 "uuid": "77686754-6d5c-4f31-ad61-bf2a6a4c413e" 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "bdev_wait_for_examine" 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "subsystem": "nbd", 00:25:30.238 "config": [] 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "subsystem": "scheduler", 00:25:30.238 "config": [ 00:25:30.238 { 00:25:30.238 "method": "framework_set_scheduler", 00:25:30.238 "params": { 00:25:30.238 "name": "static" 00:25:30.238 } 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "subsystem": "nvmf", 00:25:30.238 "config": [ 00:25:30.238 { 00:25:30.238 "method": "nvmf_set_config", 00:25:30.238 "params": { 00:25:30.238 "admin_cmd_passthru": { 00:25:30.238 "identify_ctrlr": false 00:25:30.238 }, 00:25:30.238 "discovery_filter": "match_any" 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_set_max_subsystems", 00:25:30.238 "params": { 00:25:30.238 "max_subsystems": 1024 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_set_crdt", 00:25:30.238 "params": { 00:25:30.238 "crdt1": 0, 00:25:30.238 "crdt2": 0, 00:25:30.238 "crdt3": 0 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_create_transport", 00:25:30.238 "params": { 00:25:30.238 "abort_timeout_sec": 1, 00:25:30.238 "ack_timeout": 0, 00:25:30.238 "buf_cache_size": 4294967295, 00:25:30.238 "c2h_success": false, 00:25:30.238 "data_wr_pool_size": 0, 00:25:30.238 "dif_insert_or_strip": false, 00:25:30.238 "in_capsule_data_size": 4096, 00:25:30.238 "io_unit_size": 131072, 00:25:30.238 "max_aq_depth": 128, 00:25:30.238 "max_io_qpairs_per_ctrlr": 127, 00:25:30.238 "max_io_size": 131072, 00:25:30.238 "max_queue_depth": 128, 00:25:30.238 "num_shared_buffers": 511, 00:25:30.238 "sock_priority": 0, 00:25:30.238 "trtype": "TCP", 00:25:30.238 "zcopy": false 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_create_subsystem", 00:25:30.238 "params": { 00:25:30.238 "allow_any_host": false, 00:25:30.238 "ana_reporting": false, 00:25:30.238 "max_cntlid": 65519, 00:25:30.238 "max_namespaces": 10, 00:25:30.238 "min_cntlid": 1, 00:25:30.238 "model_number": "SPDK bdev Controller", 00:25:30.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.238 "serial_number": "SPDK00000000000001" 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_subsystem_add_host", 00:25:30.238 "params": { 00:25:30.238 "host": "nqn.2016-06.io.spdk:host1", 00:25:30.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.238 "psk": "/tmp/tmp.fBJPVbgFTx" 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_subsystem_add_ns", 
00:25:30.238 "params": { 00:25:30.238 "namespace": { 00:25:30.238 "bdev_name": "malloc0", 00:25:30.238 "nguid": "776867546D5C4F31AD61BF2A6A4C413E", 00:25:30.238 "no_auto_visible": false, 00:25:30.238 "nsid": 1, 00:25:30.238 "uuid": "77686754-6d5c-4f31-ad61-bf2a6a4c413e" 00:25:30.238 }, 00:25:30.238 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:30.238 } 00:25:30.238 }, 00:25:30.238 { 00:25:30.238 "method": "nvmf_subsystem_add_listener", 00:25:30.238 "params": { 00:25:30.238 "listen_address": { 00:25:30.238 "adrfam": "IPv4", 00:25:30.238 "traddr": "10.0.0.2", 00:25:30.238 "trsvcid": "4420", 00:25:30.238 "trtype": "TCP" 00:25:30.238 }, 00:25:30.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.238 "secure_channel": true 00:25:30.238 } 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 } 00:25:30.238 ] 00:25:30.238 }' 00:25:30.239 15:44:00 -- target/tls.sh@197 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:30.496 15:44:00 -- target/tls.sh@197 -- # bdevperfconf='{ 00:25:30.496 "subsystems": [ 00:25:30.496 { 00:25:30.496 "subsystem": "keyring", 00:25:30.496 "config": [] 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "subsystem": "iobuf", 00:25:30.496 "config": [ 00:25:30.496 { 00:25:30.496 "method": "iobuf_set_options", 00:25:30.496 "params": { 00:25:30.496 "large_bufsize": 135168, 00:25:30.496 "large_pool_count": 1024, 00:25:30.496 "small_bufsize": 8192, 00:25:30.496 "small_pool_count": 8192 00:25:30.496 } 00:25:30.496 } 00:25:30.496 ] 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "subsystem": "sock", 00:25:30.496 "config": [ 00:25:30.496 { 00:25:30.496 "method": "sock_impl_set_options", 00:25:30.496 "params": { 00:25:30.496 "enable_ktls": false, 00:25:30.496 "enable_placement_id": 0, 00:25:30.496 "enable_quickack": false, 00:25:30.496 "enable_recv_pipe": true, 00:25:30.496 "enable_zerocopy_send_client": false, 00:25:30.496 "enable_zerocopy_send_server": true, 00:25:30.496 "impl_name": "posix", 00:25:30.496 "recv_buf_size": 2097152, 00:25:30.496 "send_buf_size": 2097152, 00:25:30.496 "tls_version": 0, 00:25:30.496 "zerocopy_threshold": 0 00:25:30.496 } 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "method": "sock_impl_set_options", 00:25:30.496 "params": { 00:25:30.496 "enable_ktls": false, 00:25:30.496 "enable_placement_id": 0, 00:25:30.496 "enable_quickack": false, 00:25:30.496 "enable_recv_pipe": true, 00:25:30.496 "enable_zerocopy_send_client": false, 00:25:30.496 "enable_zerocopy_send_server": true, 00:25:30.496 "impl_name": "ssl", 00:25:30.496 "recv_buf_size": 4096, 00:25:30.496 "send_buf_size": 4096, 00:25:30.496 "tls_version": 0, 00:25:30.496 "zerocopy_threshold": 0 00:25:30.496 } 00:25:30.496 } 00:25:30.496 ] 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "subsystem": "vmd", 00:25:30.496 "config": [] 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "subsystem": "accel", 00:25:30.496 "config": [ 00:25:30.496 { 00:25:30.496 "method": "accel_set_options", 00:25:30.496 "params": { 00:25:30.496 "buf_count": 2048, 00:25:30.496 "large_cache_size": 16, 00:25:30.496 "sequence_count": 2048, 00:25:30.496 "small_cache_size": 128, 00:25:30.496 "task_count": 2048 00:25:30.496 } 00:25:30.496 } 00:25:30.496 ] 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "subsystem": "bdev", 00:25:30.496 "config": [ 00:25:30.496 { 00:25:30.496 "method": "bdev_set_options", 00:25:30.496 "params": { 00:25:30.496 "bdev_auto_examine": true, 00:25:30.496 "bdev_io_cache_size": 256, 00:25:30.496 "bdev_io_pool_size": 65535, 00:25:30.496 "iobuf_large_cache_size": 16, 00:25:30.496 "iobuf_small_cache_size": 128 
00:25:30.496 } 00:25:30.496 }, 00:25:30.496 { 00:25:30.496 "method": "bdev_raid_set_options", 00:25:30.496 "params": { 00:25:30.496 "process_window_size_kb": 1024 00:25:30.496 } 00:25:30.497 }, 00:25:30.497 { 00:25:30.497 "method": "bdev_iscsi_set_options", 00:25:30.497 "params": { 00:25:30.497 "timeout_sec": 30 00:25:30.497 } 00:25:30.497 }, 00:25:30.497 { 00:25:30.497 "method": "bdev_nvme_set_options", 00:25:30.497 "params": { 00:25:30.497 "action_on_timeout": "none", 00:25:30.497 "allow_accel_sequence": false, 00:25:30.497 "arbitration_burst": 0, 00:25:30.497 "bdev_retry_count": 3, 00:25:30.497 "ctrlr_loss_timeout_sec": 0, 00:25:30.497 "delay_cmd_submit": true, 00:25:30.497 "dhchap_dhgroups": [ 00:25:30.497 "null", 00:25:30.497 "ffdhe2048", 00:25:30.497 "ffdhe3072", 00:25:30.497 "ffdhe4096", 00:25:30.497 "ffdhe6144", 00:25:30.497 "ffdhe8192" 00:25:30.497 ], 00:25:30.497 "dhchap_digests": [ 00:25:30.497 "sha256", 00:25:30.497 "sha384", 00:25:30.497 "sha512" 00:25:30.497 ], 00:25:30.497 "disable_auto_failback": false, 00:25:30.497 "fast_io_fail_timeout_sec": 0, 00:25:30.497 "generate_uuids": false, 00:25:30.497 "high_priority_weight": 0, 00:25:30.497 "io_path_stat": false, 00:25:30.497 "io_queue_requests": 512, 00:25:30.497 "keep_alive_timeout_ms": 10000, 00:25:30.497 "low_priority_weight": 0, 00:25:30.497 "medium_priority_weight": 0, 00:25:30.497 "nvme_adminq_poll_period_us": 10000, 00:25:30.497 "nvme_error_stat": false, 00:25:30.497 "nvme_ioq_poll_period_us": 0, 00:25:30.497 "rdma_cm_event_timeout_ms": 0, 00:25:30.497 "rdma_max_cq_size": 0, 00:25:30.497 "rdma_srq_size": 0, 00:25:30.497 "reconnect_delay_sec": 0, 00:25:30.497 "timeout_admin_us": 0, 00:25:30.497 "timeout_us": 0, 00:25:30.497 "transport_ack_timeout": 0, 00:25:30.497 "transport_retry_count": 4, 00:25:30.497 "transport_tos": 0 00:25:30.497 } 00:25:30.497 }, 00:25:30.497 { 00:25:30.497 "method": "bdev_nvme_attach_controller", 00:25:30.497 "params": { 00:25:30.497 "adrfam": "IPv4", 00:25:30.497 "ctrlr_loss_timeout_sec": 0, 00:25:30.497 "ddgst": false, 00:25:30.497 "fast_io_fail_timeout_sec": 0, 00:25:30.497 "hdgst": false, 00:25:30.497 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:30.497 "name": "TLSTEST", 00:25:30.497 "prchk_guard": false, 00:25:30.497 "prchk_reftag": false, 00:25:30.497 "psk": "/tmp/tmp.fBJPVbgFTx", 00:25:30.497 "reconnect_delay_sec": 0, 00:25:30.497 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:30.497 "traddr": "10.0.0.2", 00:25:30.497 "trsvcid": "4420", 00:25:30.497 "trtype": "TCP" 00:25:30.497 } 00:25:30.497 }, 00:25:30.497 { 00:25:30.497 "method": "bdev_nvme_set_hotplug", 00:25:30.497 "params": { 00:25:30.497 "enable": false, 00:25:30.497 "period_us": 100000 00:25:30.497 } 00:25:30.497 }, 00:25:30.497 { 00:25:30.497 "method": "bdev_wait_for_examine" 00:25:30.497 } 00:25:30.497 ] 00:25:30.497 }, 00:25:30.497 { 00:25:30.497 "subsystem": "nbd", 00:25:30.497 "config": [] 00:25:30.497 } 00:25:30.497 ] 00:25:30.497 }' 00:25:30.497 15:44:00 -- target/tls.sh@199 -- # killprocess 78184 00:25:30.497 15:44:00 -- common/autotest_common.sh@936 -- # '[' -z 78184 ']' 00:25:30.497 15:44:00 -- common/autotest_common.sh@940 -- # kill -0 78184 00:25:30.497 15:44:00 -- common/autotest_common.sh@941 -- # uname 00:25:30.497 15:44:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.497 15:44:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78184 00:25:30.497 15:44:00 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:30.497 killing process with pid 78184 00:25:30.497 15:44:00 
-- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:30.497 15:44:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78184' 00:25:30.497 15:44:00 -- common/autotest_common.sh@955 -- # kill 78184 00:25:30.497 Received shutdown signal, test time was about 10.000000 seconds 00:25:30.497 00:25:30.497 Latency(us) 00:25:30.497 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:30.497 =================================================================================================================== 00:25:30.497 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:25:30.497 [2024-04-26 15:44:00.768174] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:30.497 15:44:00 -- common/autotest_common.sh@960 -- # wait 78184 00:25:30.755 15:44:01 -- target/tls.sh@200 -- # killprocess 78080 00:25:30.755 15:44:01 -- common/autotest_common.sh@936 -- # '[' -z 78080 ']' 00:25:30.755 15:44:01 -- common/autotest_common.sh@940 -- # kill -0 78080 00:25:30.755 15:44:01 -- common/autotest_common.sh@941 -- # uname 00:25:30.755 15:44:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:30.755 15:44:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78080 00:25:31.013 15:44:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:31.013 15:44:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:31.013 killing process with pid 78080 00:25:31.013 15:44:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78080' 00:25:31.013 15:44:01 -- common/autotest_common.sh@955 -- # kill 78080 00:25:31.013 [2024-04-26 15:44:01.055099] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:31.013 15:44:01 -- common/autotest_common.sh@960 -- # wait 78080 00:25:31.271 15:44:01 -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:25:31.271 15:44:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:31.271 15:44:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:31.271 15:44:01 -- common/autotest_common.sh@10 -- # set +x 00:25:31.271 15:44:01 -- target/tls.sh@203 -- # echo '{ 00:25:31.271 "subsystems": [ 00:25:31.271 { 00:25:31.271 "subsystem": "keyring", 00:25:31.271 "config": [] 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "subsystem": "iobuf", 00:25:31.271 "config": [ 00:25:31.271 { 00:25:31.271 "method": "iobuf_set_options", 00:25:31.271 "params": { 00:25:31.271 "large_bufsize": 135168, 00:25:31.271 "large_pool_count": 1024, 00:25:31.271 "small_bufsize": 8192, 00:25:31.271 "small_pool_count": 8192 00:25:31.271 } 00:25:31.271 } 00:25:31.271 ] 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "subsystem": "sock", 00:25:31.271 "config": [ 00:25:31.271 { 00:25:31.271 "method": "sock_impl_set_options", 00:25:31.271 "params": { 00:25:31.271 "enable_ktls": false, 00:25:31.271 "enable_placement_id": 0, 00:25:31.271 "enable_quickack": false, 00:25:31.271 "enable_recv_pipe": true, 00:25:31.271 "enable_zerocopy_send_client": false, 00:25:31.271 "enable_zerocopy_send_server": true, 00:25:31.271 "impl_name": "posix", 00:25:31.271 "recv_buf_size": 2097152, 00:25:31.271 "send_buf_size": 2097152, 00:25:31.271 "tls_version": 0, 00:25:31.271 "zerocopy_threshold": 0 00:25:31.271 } 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "method": "sock_impl_set_options", 00:25:31.271 "params": { 00:25:31.271 
"enable_ktls": false, 00:25:31.271 "enable_placement_id": 0, 00:25:31.271 "enable_quickack": false, 00:25:31.271 "enable_recv_pipe": true, 00:25:31.271 "enable_zerocopy_send_client": false, 00:25:31.271 "enable_zerocopy_send_server": true, 00:25:31.271 "impl_name": "ssl", 00:25:31.271 "recv_buf_size": 4096, 00:25:31.271 "send_buf_size": 4096, 00:25:31.271 "tls_version": 0, 00:25:31.271 "zerocopy_threshold": 0 00:25:31.271 } 00:25:31.271 } 00:25:31.271 ] 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "subsystem": "vmd", 00:25:31.271 "config": [] 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "subsystem": "accel", 00:25:31.271 "config": [ 00:25:31.271 { 00:25:31.271 "method": "accel_set_options", 00:25:31.271 "params": { 00:25:31.271 "buf_count": 2048, 00:25:31.271 "large_cache_size": 16, 00:25:31.271 "sequence_count": 2048, 00:25:31.271 "small_cache_size": 128, 00:25:31.271 "task_count": 2048 00:25:31.271 } 00:25:31.271 } 00:25:31.271 ] 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "subsystem": "bdev", 00:25:31.271 "config": [ 00:25:31.271 { 00:25:31.271 "method": "bdev_set_options", 00:25:31.271 "params": { 00:25:31.271 "bdev_auto_examine": true, 00:25:31.271 "bdev_io_cache_size": 256, 00:25:31.271 "bdev_io_pool_size": 65535, 00:25:31.271 "iobuf_large_cache_size": 16, 00:25:31.271 "iobuf_small_cache_size": 128 00:25:31.271 } 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "method": "bdev_raid_set_options", 00:25:31.271 "params": { 00:25:31.271 "process_window_size_kb": 1024 00:25:31.271 } 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "method": "bdev_iscsi_set_options", 00:25:31.271 "params": { 00:25:31.271 "timeout_sec": 30 00:25:31.271 } 00:25:31.271 }, 00:25:31.271 { 00:25:31.271 "method": "bdev_nvme_set_options", 00:25:31.271 "params": { 00:25:31.271 "action_on_timeout": "none", 00:25:31.271 "allow_accel_sequence": false, 00:25:31.271 "arbitration_burst": 0, 00:25:31.271 "bdev_retry_count": 3, 00:25:31.271 "ctrlr_loss_timeout_sec": 0, 00:25:31.271 "delay_cmd_submit": true, 00:25:31.271 "dhchap_dhgroups": [ 00:25:31.271 "null", 00:25:31.271 "ffdhe2048", 00:25:31.271 "ffdhe3072", 00:25:31.272 "ffdhe4096", 00:25:31.272 "ffdhe6144", 00:25:31.272 "ffdhe8192" 00:25:31.272 ], 00:25:31.272 "dhchap_digests": [ 00:25:31.272 "sha256", 00:25:31.272 "sha384", 00:25:31.272 "sha512" 00:25:31.272 ], 00:25:31.272 "disable_auto_failback": false, 00:25:31.272 "fast_io_fail_timeout_sec": 0, 00:25:31.272 "generate_uuids": false, 00:25:31.272 "high_priority_weight": 0, 00:25:31.272 "io_path_stat": false, 00:25:31.272 "io_queue_requests": 0, 00:25:31.272 "keep_alive_timeout_ms": 10000, 00:25:31.272 "low_priority_weight": 0, 00:25:31.272 "medium_priority_weight": 0, 00:25:31.272 "nvme_adminq_poll_period_us": 10000, 00:25:31.272 "nvme_error_stat": false, 00:25:31.272 "nvme_ioq_poll_period_us": 0, 00:25:31.272 "rdma_cm_event_timeout_ms": 0, 00:25:31.272 "rdma_max_cq_size": 0, 00:25:31.272 "rdma_srq_size": 0, 00:25:31.272 "reconnect_delay_sec": 0, 00:25:31.272 "timeout_admin_us": 0, 00:25:31.272 "timeout_us": 0, 00:25:31.272 "transport_ack_timeout": 0, 00:25:31.272 "transport_retry_count": 4, 00:25:31.272 "transport_tos": 0 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "bdev_nvme_set_hotplug", 00:25:31.272 "params": { 00:25:31.272 "enable": false, 00:25:31.272 "period_us": 100000 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "bdev_malloc_create", 00:25:31.272 "params": { 00:25:31.272 "block_size": 4096, 00:25:31.272 "name": "malloc0", 00:25:31.272 "num_blocks": 8192, 00:25:31.272 
"optimal_io_boundary": 0, 00:25:31.272 "physical_block_size": 4096, 00:25:31.272 "uuid": "77686754-6d5c-4f31-ad61-bf2a6a4c413e" 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "bdev_wait_for_examine" 00:25:31.272 } 00:25:31.272 ] 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "subsystem": "nbd", 00:25:31.272 "config": [] 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "subsystem": "scheduler", 00:25:31.272 "config": [ 00:25:31.272 { 00:25:31.272 "method": "framework_set_scheduler", 00:25:31.272 "params": { 00:25:31.272 "name": "static" 00:25:31.272 } 00:25:31.272 } 00:25:31.272 ] 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "subsystem": "nvmf", 00:25:31.272 "config": [ 00:25:31.272 { 00:25:31.272 "method": "nvmf_set_config", 00:25:31.272 "params": { 00:25:31.272 "admin_cmd_passthru": { 00:25:31.272 "identify_ctrlr": false 00:25:31.272 }, 00:25:31.272 "discovery_filter": "match_any" 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_set_max_subsystems", 00:25:31.272 "params": { 00:25:31.272 "max_subsystems": 1024 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_set_crdt", 00:25:31.272 "params": { 00:25:31.272 "crdt1": 0, 00:25:31.272 "crdt2": 0, 00:25:31.272 "crdt3": 0 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_create_transport", 00:25:31.272 "params": { 00:25:31.272 "abort_timeout_sec": 1, 00:25:31.272 "ack_timeout": 0, 00:25:31.272 "buf_cache_size": 4294967295, 00:25:31.272 "c2h_success": false, 00:25:31.272 "data_wr_pool_size": 0, 00:25:31.272 "dif_insert_or_strip": false, 00:25:31.272 "in_capsule_data_size": 4096, 00:25:31.272 "io_unit_size": 131072, 00:25:31.272 "max_aq_depth": 128, 00:25:31.272 "max_io_qpairs_per_ctrlr": 127, 00:25:31.272 "max_io_size": 131072, 00:25:31.272 "max_queue_depth": 128, 00:25:31.272 "num_shared_buffers": 511, 00:25:31.272 "sock_priority": 0, 00:25:31.272 "trtype": "TCP", 00:25:31.272 "zcopy": false 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_create_subsystem", 00:25:31.272 "params": { 00:25:31.272 "allow_any_host": false, 00:25:31.272 "ana_reporting": false, 00:25:31.272 "max_cntlid": 65519, 00:25:31.272 "max_namespaces": 10, 00:25:31.272 "min_cntlid": 1, 00:25:31.272 "model_number": "SPDK bdev Controller", 00:25:31.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.272 "serial_number": "SPDK00000000000001" 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_subsystem_add_host", 00:25:31.272 "params": { 00:25:31.272 "host": "nqn.2016-06.io.spdk:host1", 00:25:31.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.272 "psk": "/tmp/tmp.fBJPVbgFTx" 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_subsystem_add_ns", 00:25:31.272 "params": { 00:25:31.272 "namespace": { 00:25:31.272 "bdev_name": "malloc0", 00:25:31.272 "nguid": "776867546D5C4F31AD61BF2A6A4C413E", 00:25:31.272 "no_auto_visible": false, 00:25:31.272 "nsid": 1, 00:25:31.272 "uuid": "77686754-6d5c-4f31-ad61-bf2a6a4c413e" 00:25:31.272 }, 00:25:31.272 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:31.272 } 00:25:31.272 }, 00:25:31.272 { 00:25:31.272 "method": "nvmf_subsystem_add_listener", 00:25:31.272 "params": { 00:25:31.272 "listen_address": { 00:25:31.272 "adrfam": "IPv4", 00:25:31.272 "traddr": "10.0.0.2", 00:25:31.272 "trsvcid": "4420", 00:25:31.272 "trtype": "TCP" 00:25:31.272 }, 00:25:31.272 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:31.272 "secure_channel": true 00:25:31.272 } 00:25:31.272 } 00:25:31.272 ] 00:25:31.272 } 
00:25:31.272 ] 00:25:31.272 }' 00:25:31.272 15:44:01 -- nvmf/common.sh@470 -- # nvmfpid=78257 00:25:31.272 15:44:01 -- nvmf/common.sh@471 -- # waitforlisten 78257 00:25:31.272 15:44:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:25:31.272 15:44:01 -- common/autotest_common.sh@817 -- # '[' -z 78257 ']' 00:25:31.272 15:44:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.272 15:44:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:31.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:31.272 15:44:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.272 15:44:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:31.272 15:44:01 -- common/autotest_common.sh@10 -- # set +x 00:25:31.272 [2024-04-26 15:44:01.387131] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:31.272 [2024-04-26 15:44:01.387250] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.272 [2024-04-26 15:44:01.521770] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:31.530 [2024-04-26 15:44:01.643240] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.530 [2024-04-26 15:44:01.643291] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.530 [2024-04-26 15:44:01.643302] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.530 [2024-04-26 15:44:01.643311] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.530 [2024-04-26 15:44:01.643318] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:31.530 [2024-04-26 15:44:01.643418] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.788 [2024-04-26 15:44:01.866363] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.788 [2024-04-26 15:44:01.882293] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:31.788 [2024-04-26 15:44:01.898296] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:31.788 [2024-04-26 15:44:01.898534] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:32.353 15:44:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:32.353 15:44:02 -- common/autotest_common.sh@850 -- # return 0 00:25:32.353 15:44:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:32.353 15:44:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:32.353 15:44:02 -- common/autotest_common.sh@10 -- # set +x 00:25:32.353 15:44:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:32.353 15:44:02 -- target/tls.sh@207 -- # bdevperf_pid=78301 00:25:32.353 15:44:02 -- target/tls.sh@208 -- # waitforlisten 78301 /var/tmp/bdevperf.sock 00:25:32.353 15:44:02 -- common/autotest_common.sh@817 -- # '[' -z 78301 ']' 00:25:32.353 15:44:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:32.353 15:44:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:32.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:32.353 15:44:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
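The JSON blob piped to nvmf_tgt on /dev/fd/62 above is a pre-captured configuration; its TLS-relevant entries correspond to the per-call setup that the tls.sh helper setup_nvmf_tgt issues later in this log. A condensed sketch of that sequence, using the same NQNs, listen address and PSK file as this run (rpc.py shown without its full /home/vagrant/spdk_repo/spdk/scripts/ path):

    # create the TCP transport (flags as used by tls.sh), the subsystem, and a secure-channel listener (-k)
    rpc.py nvmf_create_transport -t tcp -o
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
    # back the namespace with a malloc bdev (32 MB, 4096-byte blocks)
    rpc.py bdev_malloc_create 32 4096 -b malloc0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
    # deprecated "PSK path" form of host authorization, matching the nvmf_tcp_psk_path warning above
    rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx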
00:25:32.353 15:44:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:32.353 15:44:02 -- common/autotest_common.sh@10 -- # set +x 00:25:32.353 15:44:02 -- target/tls.sh@204 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:25:32.353 15:44:02 -- target/tls.sh@204 -- # echo '{ 00:25:32.353 "subsystems": [ 00:25:32.353 { 00:25:32.353 "subsystem": "keyring", 00:25:32.353 "config": [] 00:25:32.353 }, 00:25:32.353 { 00:25:32.353 "subsystem": "iobuf", 00:25:32.353 "config": [ 00:25:32.353 { 00:25:32.353 "method": "iobuf_set_options", 00:25:32.353 "params": { 00:25:32.353 "large_bufsize": 135168, 00:25:32.353 "large_pool_count": 1024, 00:25:32.353 "small_bufsize": 8192, 00:25:32.353 "small_pool_count": 8192 00:25:32.353 } 00:25:32.353 } 00:25:32.353 ] 00:25:32.353 }, 00:25:32.353 { 00:25:32.353 "subsystem": "sock", 00:25:32.353 "config": [ 00:25:32.353 { 00:25:32.353 "method": "sock_impl_set_options", 00:25:32.353 "params": { 00:25:32.353 "enable_ktls": false, 00:25:32.353 "enable_placement_id": 0, 00:25:32.353 "enable_quickack": false, 00:25:32.353 "enable_recv_pipe": true, 00:25:32.353 "enable_zerocopy_send_client": false, 00:25:32.353 "enable_zerocopy_send_server": true, 00:25:32.353 "impl_name": "posix", 00:25:32.353 "recv_buf_size": 2097152, 00:25:32.353 "send_buf_size": 2097152, 00:25:32.353 "tls_version": 0, 00:25:32.353 "zerocopy_threshold": 0 00:25:32.353 } 00:25:32.353 }, 00:25:32.353 { 00:25:32.353 "method": "sock_impl_set_options", 00:25:32.353 "params": { 00:25:32.353 "enable_ktls": false, 00:25:32.353 "enable_placement_id": 0, 00:25:32.353 "enable_quickack": false, 00:25:32.353 "enable_recv_pipe": true, 00:25:32.353 "enable_zerocopy_send_client": false, 00:25:32.353 "enable_zerocopy_send_server": true, 00:25:32.353 "impl_name": "ssl", 00:25:32.353 "recv_buf_size": 4096, 00:25:32.353 "send_buf_size": 4096, 00:25:32.353 "tls_version": 0, 00:25:32.353 "zerocopy_threshold": 0 00:25:32.353 } 00:25:32.353 } 00:25:32.353 ] 00:25:32.353 }, 00:25:32.353 { 00:25:32.353 "subsystem": "vmd", 00:25:32.353 "config": [] 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "subsystem": "accel", 00:25:32.354 "config": [ 00:25:32.354 { 00:25:32.354 "method": "accel_set_options", 00:25:32.354 "params": { 00:25:32.354 "buf_count": 2048, 00:25:32.354 "large_cache_size": 16, 00:25:32.354 "sequence_count": 2048, 00:25:32.354 "small_cache_size": 128, 00:25:32.354 "task_count": 2048 00:25:32.354 } 00:25:32.354 } 00:25:32.354 ] 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "subsystem": "bdev", 00:25:32.354 "config": [ 00:25:32.354 { 00:25:32.354 "method": "bdev_set_options", 00:25:32.354 "params": { 00:25:32.354 "bdev_auto_examine": true, 00:25:32.354 "bdev_io_cache_size": 256, 00:25:32.354 "bdev_io_pool_size": 65535, 00:25:32.354 "iobuf_large_cache_size": 16, 00:25:32.354 "iobuf_small_cache_size": 128 00:25:32.354 } 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "method": "bdev_raid_set_options", 00:25:32.354 "params": { 00:25:32.354 "process_window_size_kb": 1024 00:25:32.354 } 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "method": "bdev_iscsi_set_options", 00:25:32.354 "params": { 00:25:32.354 "timeout_sec": 30 00:25:32.354 } 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "method": "bdev_nvme_set_options", 00:25:32.354 "params": { 00:25:32.354 "action_on_timeout": "none", 00:25:32.354 "allow_accel_sequence": false, 00:25:32.354 "arbitration_burst": 0, 00:25:32.354 "bdev_retry_count": 3, 00:25:32.354 
"ctrlr_loss_timeout_sec": 0, 00:25:32.354 "delay_cmd_submit": true, 00:25:32.354 "dhchap_dhgroups": [ 00:25:32.354 "null", 00:25:32.354 "ffdhe2048", 00:25:32.354 "ffdhe3072", 00:25:32.354 "ffdhe4096", 00:25:32.354 "ffdhe6144", 00:25:32.354 "ffdhe8192" 00:25:32.354 ], 00:25:32.354 "dhchap_digests": [ 00:25:32.354 "sha256", 00:25:32.354 "sha384", 00:25:32.354 "sha512" 00:25:32.354 ], 00:25:32.354 "disable_auto_failback": false, 00:25:32.354 "fast_io_fail_timeout_sec": 0, 00:25:32.354 "generate_uuids": false, 00:25:32.354 "high_priority_weight": 0, 00:25:32.354 "io_path_stat": false, 00:25:32.354 "io_queue_requests": 512, 00:25:32.354 "keep_alive_timeout_ms": 10000, 00:25:32.354 "low_priority_weight": 0, 00:25:32.354 "medium_priority_weight": 0, 00:25:32.354 "nvme_adminq_poll_period_us": 10000, 00:25:32.354 "nvme_error_stat": false, 00:25:32.354 "nvme_ioq_poll_period_us": 0, 00:25:32.354 "rdma_cm_event_timeout_ms": 0, 00:25:32.354 "rdma_max_cq_size": 0, 00:25:32.354 "rdma_srq_size": 0, 00:25:32.354 "reconnect_delay_sec": 0, 00:25:32.354 "timeout_admin_us": 0, 00:25:32.354 "timeout_us": 0, 00:25:32.354 "transport_ack_timeout": 0, 00:25:32.354 "transport_retry_count": 4, 00:25:32.354 "transport_tos": 0 00:25:32.354 } 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "method": "bdev_nvme_attach_controller", 00:25:32.354 "params": { 00:25:32.354 "adrfam": "IPv4", 00:25:32.354 "ctrlr_loss_timeout_sec": 0, 00:25:32.354 "ddgst": false, 00:25:32.354 "fast_io_fail_timeout_sec": 0, 00:25:32.354 "hdgst": false, 00:25:32.354 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:32.354 "name": "TLSTEST", 00:25:32.354 "prchk_guard": false, 00:25:32.354 "prchk_reftag": false, 00:25:32.354 "psk": "/tmp/tmp.fBJPVbgFTx", 00:25:32.354 "reconnect_delay_sec": 0, 00:25:32.354 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:32.354 "traddr": "10.0.0.2", 00:25:32.354 "trsvcid": "4420", 00:25:32.354 "trtype": "TCP" 00:25:32.354 } 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "method": "bdev_nvme_set_hotplug", 00:25:32.354 "params": { 00:25:32.354 "enable": false, 00:25:32.354 "period_us": 100000 00:25:32.354 } 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "method": "bdev_wait_for_examine" 00:25:32.354 } 00:25:32.354 ] 00:25:32.354 }, 00:25:32.354 { 00:25:32.354 "subsystem": "nbd", 00:25:32.354 "config": [] 00:25:32.354 } 00:25:32.354 ] 00:25:32.354 }' 00:25:32.354 [2024-04-26 15:44:02.452327] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:25:32.354 [2024-04-26 15:44:02.452420] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78301 ] 00:25:32.354 [2024-04-26 15:44:02.585337] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:32.612 [2024-04-26 15:44:02.734982] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:25:32.612 [2024-04-26 15:44:02.893791] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:32.612 [2024-04-26 15:44:02.893917] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:25:33.543 15:44:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:33.543 15:44:03 -- common/autotest_common.sh@850 -- # return 0 00:25:33.543 15:44:03 -- target/tls.sh@211 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:25:33.543 Running I/O for 10 seconds... 00:25:43.579 00:25:43.579 Latency(us) 00:25:43.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.579 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:25:43.579 Verification LBA range: start 0x0 length 0x2000 00:25:43.579 TLSTESTn1 : 10.03 3815.59 14.90 0.00 0.00 33480.52 10366.60 22758.87 00:25:43.579 =================================================================================================================== 00:25:43.579 Total : 3815.59 14.90 0.00 0.00 33480.52 10366.60 22758.87 00:25:43.579 0 00:25:43.579 15:44:13 -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:43.579 15:44:13 -- target/tls.sh@214 -- # killprocess 78301 00:25:43.579 15:44:13 -- common/autotest_common.sh@936 -- # '[' -z 78301 ']' 00:25:43.579 15:44:13 -- common/autotest_common.sh@940 -- # kill -0 78301 00:25:43.579 15:44:13 -- common/autotest_common.sh@941 -- # uname 00:25:43.579 15:44:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.579 15:44:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78301 00:25:43.579 15:44:13 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:25:43.579 15:44:13 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:25:43.579 killing process with pid 78301 00:25:43.579 15:44:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78301' 00:25:43.579 15:44:13 -- common/autotest_common.sh@955 -- # kill 78301 00:25:43.579 Received shutdown signal, test time was about 10.000000 seconds 00:25:43.579 00:25:43.579 Latency(us) 00:25:43.579 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:43.579 =================================================================================================================== 00:25:43.579 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:43.579 [2024-04-26 15:44:13.663832] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:25:43.579 15:44:13 -- common/autotest_common.sh@960 -- # wait 78301 00:25:43.838 15:44:13 -- target/tls.sh@215 -- # killprocess 78257 00:25:43.838 15:44:13 -- common/autotest_common.sh@936 -- # '[' -z 78257 ']' 00:25:43.838 15:44:13 -- common/autotest_common.sh@940 -- # kill -0 78257 00:25:43.838 
15:44:13 -- common/autotest_common.sh@941 -- # uname 00:25:43.838 15:44:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:43.838 15:44:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78257 00:25:43.838 15:44:13 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:43.838 15:44:13 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:43.838 killing process with pid 78257 00:25:43.838 15:44:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78257' 00:25:43.838 15:44:13 -- common/autotest_common.sh@955 -- # kill 78257 00:25:43.838 [2024-04-26 15:44:13.939985] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:43.838 15:44:13 -- common/autotest_common.sh@960 -- # wait 78257 00:25:44.096 15:44:14 -- target/tls.sh@218 -- # nvmfappstart 00:25:44.096 15:44:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:44.096 15:44:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:44.096 15:44:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.096 15:44:14 -- nvmf/common.sh@470 -- # nvmfpid=78459 00:25:44.096 15:44:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:44.096 15:44:14 -- nvmf/common.sh@471 -- # waitforlisten 78459 00:25:44.096 15:44:14 -- common/autotest_common.sh@817 -- # '[' -z 78459 ']' 00:25:44.096 15:44:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:44.096 15:44:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:44.096 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:44.096 15:44:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:44.096 15:44:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:44.096 15:44:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.096 [2024-04-26 15:44:14.266609] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:44.096 [2024-04-26 15:44:14.266705] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:44.354 [2024-04-26 15:44:14.406746] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:44.354 [2024-04-26 15:44:14.531334] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:44.354 [2024-04-26 15:44:14.531409] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:44.354 [2024-04-26 15:44:14.531423] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:44.354 [2024-04-26 15:44:14.531433] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:44.354 [2024-04-26 15:44:14.531443] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:44.354 [2024-04-26 15:44:14.531492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:44.611 15:44:14 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:44.611 15:44:14 -- common/autotest_common.sh@850 -- # return 0 00:25:44.611 15:44:14 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:44.611 15:44:14 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:44.611 15:44:14 -- common/autotest_common.sh@10 -- # set +x 00:25:44.611 15:44:14 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:44.611 15:44:14 -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.fBJPVbgFTx 00:25:44.611 15:44:14 -- target/tls.sh@49 -- # local key=/tmp/tmp.fBJPVbgFTx 00:25:44.611 15:44:14 -- target/tls.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:44.869 [2024-04-26 15:44:14.917445] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:44.869 15:44:14 -- target/tls.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:25:45.136 15:44:15 -- target/tls.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:25:45.409 [2024-04-26 15:44:15.481562] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:45.409 [2024-04-26 15:44:15.481791] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:45.409 15:44:15 -- target/tls.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:25:45.667 malloc0 00:25:45.667 15:44:15 -- target/tls.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:25:45.926 15:44:16 -- target/tls.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.fBJPVbgFTx 00:25:46.184 [2024-04-26 15:44:16.301677] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:25:46.184 15:44:16 -- target/tls.sh@222 -- # bdevperf_pid=78543 00:25:46.184 15:44:16 -- target/tls.sh@220 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:46.184 15:44:16 -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:25:46.184 15:44:16 -- target/tls.sh@225 -- # waitforlisten 78543 /var/tmp/bdevperf.sock 00:25:46.184 15:44:16 -- common/autotest_common.sh@817 -- # '[' -z 78543 ']' 00:25:46.184 15:44:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:46.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:46.184 15:44:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:46.184 15:44:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:46.184 15:44:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:46.184 15:44:16 -- common/autotest_common.sh@10 -- # set +x 00:25:46.184 [2024-04-26 15:44:16.366998] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:25:46.184 [2024-04-26 15:44:16.367074] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78543 ] 00:25:46.443 [2024-04-26 15:44:16.503534] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:46.443 [2024-04-26 15:44:16.630000] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.375 15:44:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:47.375 15:44:17 -- common/autotest_common.sh@850 -- # return 0 00:25:47.375 15:44:17 -- target/tls.sh@227 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fBJPVbgFTx 00:25:47.375 15:44:17 -- target/tls.sh@228 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:47.633 [2024-04-26 15:44:17.848539] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:47.633 nvme0n1 00:25:47.891 15:44:17 -- target/tls.sh@232 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:47.891 Running I/O for 1 seconds... 00:25:48.824 00:25:48.824 Latency(us) 00:25:48.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:48.824 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:48.824 Verification LBA range: start 0x0 length 0x2000 00:25:48.824 nvme0n1 : 1.02 3900.79 15.24 0.00 0.00 32445.20 10724.07 23235.49 00:25:48.824 =================================================================================================================== 00:25:48.824 Total : 3900.79 15.24 0.00 0.00 32445.20 10724.07 23235.49 00:25:48.824 0 00:25:48.824 15:44:19 -- target/tls.sh@234 -- # killprocess 78543 00:25:48.824 15:44:19 -- common/autotest_common.sh@936 -- # '[' -z 78543 ']' 00:25:48.824 15:44:19 -- common/autotest_common.sh@940 -- # kill -0 78543 00:25:48.824 15:44:19 -- common/autotest_common.sh@941 -- # uname 00:25:48.824 15:44:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:48.824 15:44:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78543 00:25:49.082 15:44:19 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:49.082 15:44:19 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:49.082 killing process with pid 78543 00:25:49.082 15:44:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78543' 00:25:49.082 Received shutdown signal, test time was about 1.000000 seconds 00:25:49.082 00:25:49.082 Latency(us) 00:25:49.082 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:49.082 =================================================================================================================== 00:25:49.082 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:49.082 15:44:19 -- common/autotest_common.sh@955 -- # kill 78543 00:25:49.082 15:44:19 -- common/autotest_common.sh@960 -- # wait 78543 00:25:49.339 15:44:19 -- target/tls.sh@235 -- # killprocess 78459 00:25:49.339 15:44:19 -- common/autotest_common.sh@936 -- # '[' -z 78459 ']' 00:25:49.339 15:44:19 -- common/autotest_common.sh@940 -- # kill -0 78459 00:25:49.339 15:44:19 -- common/autotest_common.sh@941 -- # 
uname 00:25:49.339 15:44:19 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:49.339 15:44:19 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78459 00:25:49.339 15:44:19 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:49.339 killing process with pid 78459 00:25:49.339 15:44:19 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:49.339 15:44:19 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78459' 00:25:49.339 15:44:19 -- common/autotest_common.sh@955 -- # kill 78459 00:25:49.339 [2024-04-26 15:44:19.418234] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:25:49.339 15:44:19 -- common/autotest_common.sh@960 -- # wait 78459 00:25:49.598 15:44:19 -- target/tls.sh@238 -- # nvmfappstart 00:25:49.598 15:44:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:49.598 15:44:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:49.598 15:44:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.598 15:44:19 -- nvmf/common.sh@470 -- # nvmfpid=78618 00:25:49.598 15:44:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:49.598 15:44:19 -- nvmf/common.sh@471 -- # waitforlisten 78618 00:25:49.598 15:44:19 -- common/autotest_common.sh@817 -- # '[' -z 78618 ']' 00:25:49.598 15:44:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.598 15:44:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:49.598 15:44:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:49.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.598 15:44:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:49.598 15:44:19 -- common/autotest_common.sh@10 -- # set +x 00:25:49.598 [2024-04-26 15:44:19.762994] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:49.598 [2024-04-26 15:44:19.763092] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.856 [2024-04-26 15:44:19.901266] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:49.856 [2024-04-26 15:44:20.018088] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.856 [2024-04-26 15:44:20.018169] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.856 [2024-04-26 15:44:20.018182] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.856 [2024-04-26 15:44:20.018201] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.856 [2024-04-26 15:44:20.018208] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
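In the pass above (bdevperf pid 78543), the initiator no longer embeds the PSK file path in its attach parameters; it first registers the file as a named key and then references that name. A condensed sketch of the keyring-based attach, with the socket, key name and NQNs from this run (rpc.py and bdevperf.py shown without their full repository paths):

    # register the PSK file under the name "key0" inside the bdevperf application
    rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fBJPVbgFTx
    # attach to the TLS listener, referencing the key by name rather than by path
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1
    # run the verify workload that produced the nvme0n1 latency line above
    bdevperf.py -s /var/tmp/bdevperf.sock perform_tests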
00:25:49.856 [2024-04-26 15:44:20.018240] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:50.791 15:44:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:50.791 15:44:20 -- common/autotest_common.sh@850 -- # return 0 00:25:50.791 15:44:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:50.791 15:44:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:50.791 15:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:50.791 15:44:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:50.791 15:44:20 -- target/tls.sh@239 -- # rpc_cmd 00:25:50.791 15:44:20 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:50.791 15:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:50.791 [2024-04-26 15:44:20.800550] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.791 malloc0 00:25:50.791 [2024-04-26 15:44:20.832068] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:50.791 [2024-04-26 15:44:20.832275] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.791 15:44:20 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:50.791 15:44:20 -- target/tls.sh@252 -- # bdevperf_pid=78669 00:25:50.791 15:44:20 -- target/tls.sh@250 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:25:50.791 15:44:20 -- target/tls.sh@254 -- # waitforlisten 78669 /var/tmp/bdevperf.sock 00:25:50.791 15:44:20 -- common/autotest_common.sh@817 -- # '[' -z 78669 ']' 00:25:50.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:50.791 15:44:20 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:50.791 15:44:20 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:50.791 15:44:20 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:25:50.791 15:44:20 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:50.791 15:44:20 -- common/autotest_common.sh@10 -- # set +x 00:25:50.791 [2024-04-26 15:44:20.912097] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:25:50.791 [2024-04-26 15:44:20.912193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78669 ] 00:25:50.791 [2024-04-26 15:44:21.050968] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:51.049 [2024-04-26 15:44:21.183011] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:51.987 15:44:21 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:51.987 15:44:21 -- common/autotest_common.sh@850 -- # return 0 00:25:51.987 15:44:21 -- target/tls.sh@255 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.fBJPVbgFTx 00:25:51.987 15:44:22 -- target/tls.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:25:52.245 [2024-04-26 15:44:22.466568] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:52.245 nvme0n1 00:25:52.503 15:44:22 -- target/tls.sh@260 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:52.503 Running I/O for 1 seconds... 00:25:53.438 00:25:53.438 Latency(us) 00:25:53.438 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.438 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:53.438 Verification LBA range: start 0x0 length 0x2000 00:25:53.438 nvme0n1 : 1.02 3799.24 14.84 0.00 0.00 33197.64 5481.19 19899.11 00:25:53.438 =================================================================================================================== 00:25:53.438 Total : 3799.24 14.84 0.00 0.00 33197.64 5481.19 19899.11 00:25:53.438 0 00:25:53.696 15:44:23 -- target/tls.sh@263 -- # rpc_cmd save_config 00:25:53.696 15:44:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:25:53.696 15:44:23 -- common/autotest_common.sh@10 -- # set +x 00:25:53.696 15:44:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:25:53.696 15:44:23 -- target/tls.sh@263 -- # tgtcfg='{ 00:25:53.696 "subsystems": [ 00:25:53.696 { 00:25:53.696 "subsystem": "keyring", 00:25:53.696 "config": [ 00:25:53.696 { 00:25:53.696 "method": "keyring_file_add_key", 00:25:53.696 "params": { 00:25:53.696 "name": "key0", 00:25:53.696 "path": "/tmp/tmp.fBJPVbgFTx" 00:25:53.696 } 00:25:53.696 } 00:25:53.696 ] 00:25:53.696 }, 00:25:53.696 { 00:25:53.696 "subsystem": "iobuf", 00:25:53.696 "config": [ 00:25:53.696 { 00:25:53.696 "method": "iobuf_set_options", 00:25:53.696 "params": { 00:25:53.696 "large_bufsize": 135168, 00:25:53.696 "large_pool_count": 1024, 00:25:53.696 "small_bufsize": 8192, 00:25:53.696 "small_pool_count": 8192 00:25:53.696 } 00:25:53.696 } 00:25:53.696 ] 00:25:53.696 }, 00:25:53.696 { 00:25:53.696 "subsystem": "sock", 00:25:53.696 "config": [ 00:25:53.696 { 00:25:53.696 "method": "sock_impl_set_options", 00:25:53.696 "params": { 00:25:53.696 "enable_ktls": false, 00:25:53.696 "enable_placement_id": 0, 00:25:53.696 "enable_quickack": false, 00:25:53.696 "enable_recv_pipe": true, 00:25:53.696 "enable_zerocopy_send_client": false, 00:25:53.696 "enable_zerocopy_send_server": true, 00:25:53.696 "impl_name": "posix", 00:25:53.696 "recv_buf_size": 2097152, 00:25:53.696 "send_buf_size": 2097152, 
00:25:53.696 "tls_version": 0, 00:25:53.696 "zerocopy_threshold": 0 00:25:53.696 } 00:25:53.696 }, 00:25:53.696 { 00:25:53.696 "method": "sock_impl_set_options", 00:25:53.696 "params": { 00:25:53.696 "enable_ktls": false, 00:25:53.696 "enable_placement_id": 0, 00:25:53.696 "enable_quickack": false, 00:25:53.696 "enable_recv_pipe": true, 00:25:53.696 "enable_zerocopy_send_client": false, 00:25:53.696 "enable_zerocopy_send_server": true, 00:25:53.696 "impl_name": "ssl", 00:25:53.696 "recv_buf_size": 4096, 00:25:53.696 "send_buf_size": 4096, 00:25:53.696 "tls_version": 0, 00:25:53.696 "zerocopy_threshold": 0 00:25:53.696 } 00:25:53.696 } 00:25:53.696 ] 00:25:53.696 }, 00:25:53.696 { 00:25:53.696 "subsystem": "vmd", 00:25:53.696 "config": [] 00:25:53.696 }, 00:25:53.696 { 00:25:53.696 "subsystem": "accel", 00:25:53.696 "config": [ 00:25:53.696 { 00:25:53.696 "method": "accel_set_options", 00:25:53.696 "params": { 00:25:53.696 "buf_count": 2048, 00:25:53.696 "large_cache_size": 16, 00:25:53.696 "sequence_count": 2048, 00:25:53.696 "small_cache_size": 128, 00:25:53.696 "task_count": 2048 00:25:53.696 } 00:25:53.696 } 00:25:53.696 ] 00:25:53.696 }, 00:25:53.696 { 00:25:53.696 "subsystem": "bdev", 00:25:53.696 "config": [ 00:25:53.696 { 00:25:53.696 "method": "bdev_set_options", 00:25:53.696 "params": { 00:25:53.696 "bdev_auto_examine": true, 00:25:53.696 "bdev_io_cache_size": 256, 00:25:53.696 "bdev_io_pool_size": 65535, 00:25:53.696 "iobuf_large_cache_size": 16, 00:25:53.696 "iobuf_small_cache_size": 128 00:25:53.696 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "bdev_raid_set_options", 00:25:53.697 "params": { 00:25:53.697 "process_window_size_kb": 1024 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "bdev_iscsi_set_options", 00:25:53.697 "params": { 00:25:53.697 "timeout_sec": 30 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "bdev_nvme_set_options", 00:25:53.697 "params": { 00:25:53.697 "action_on_timeout": "none", 00:25:53.697 "allow_accel_sequence": false, 00:25:53.697 "arbitration_burst": 0, 00:25:53.697 "bdev_retry_count": 3, 00:25:53.697 "ctrlr_loss_timeout_sec": 0, 00:25:53.697 "delay_cmd_submit": true, 00:25:53.697 "dhchap_dhgroups": [ 00:25:53.697 "null", 00:25:53.697 "ffdhe2048", 00:25:53.697 "ffdhe3072", 00:25:53.697 "ffdhe4096", 00:25:53.697 "ffdhe6144", 00:25:53.697 "ffdhe8192" 00:25:53.697 ], 00:25:53.697 "dhchap_digests": [ 00:25:53.697 "sha256", 00:25:53.697 "sha384", 00:25:53.697 "sha512" 00:25:53.697 ], 00:25:53.697 "disable_auto_failback": false, 00:25:53.697 "fast_io_fail_timeout_sec": 0, 00:25:53.697 "generate_uuids": false, 00:25:53.697 "high_priority_weight": 0, 00:25:53.697 "io_path_stat": false, 00:25:53.697 "io_queue_requests": 0, 00:25:53.697 "keep_alive_timeout_ms": 10000, 00:25:53.697 "low_priority_weight": 0, 00:25:53.697 "medium_priority_weight": 0, 00:25:53.697 "nvme_adminq_poll_period_us": 10000, 00:25:53.697 "nvme_error_stat": false, 00:25:53.697 "nvme_ioq_poll_period_us": 0, 00:25:53.697 "rdma_cm_event_timeout_ms": 0, 00:25:53.697 "rdma_max_cq_size": 0, 00:25:53.697 "rdma_srq_size": 0, 00:25:53.697 "reconnect_delay_sec": 0, 00:25:53.697 "timeout_admin_us": 0, 00:25:53.697 "timeout_us": 0, 00:25:53.697 "transport_ack_timeout": 0, 00:25:53.697 "transport_retry_count": 4, 00:25:53.697 "transport_tos": 0 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "bdev_nvme_set_hotplug", 00:25:53.697 "params": { 00:25:53.697 "enable": false, 00:25:53.697 "period_us": 100000 00:25:53.697 } 00:25:53.697 
}, 00:25:53.697 { 00:25:53.697 "method": "bdev_malloc_create", 00:25:53.697 "params": { 00:25:53.697 "block_size": 4096, 00:25:53.697 "name": "malloc0", 00:25:53.697 "num_blocks": 8192, 00:25:53.697 "optimal_io_boundary": 0, 00:25:53.697 "physical_block_size": 4096, 00:25:53.697 "uuid": "d4d77b23-f7d9-40c2-9c12-a4d431c8eb24" 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "bdev_wait_for_examine" 00:25:53.697 } 00:25:53.697 ] 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "subsystem": "nbd", 00:25:53.697 "config": [] 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "subsystem": "scheduler", 00:25:53.697 "config": [ 00:25:53.697 { 00:25:53.697 "method": "framework_set_scheduler", 00:25:53.697 "params": { 00:25:53.697 "name": "static" 00:25:53.697 } 00:25:53.697 } 00:25:53.697 ] 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "subsystem": "nvmf", 00:25:53.697 "config": [ 00:25:53.697 { 00:25:53.697 "method": "nvmf_set_config", 00:25:53.697 "params": { 00:25:53.697 "admin_cmd_passthru": { 00:25:53.697 "identify_ctrlr": false 00:25:53.697 }, 00:25:53.697 "discovery_filter": "match_any" 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_set_max_subsystems", 00:25:53.697 "params": { 00:25:53.697 "max_subsystems": 1024 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_set_crdt", 00:25:53.697 "params": { 00:25:53.697 "crdt1": 0, 00:25:53.697 "crdt2": 0, 00:25:53.697 "crdt3": 0 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_create_transport", 00:25:53.697 "params": { 00:25:53.697 "abort_timeout_sec": 1, 00:25:53.697 "ack_timeout": 0, 00:25:53.697 "buf_cache_size": 4294967295, 00:25:53.697 "c2h_success": false, 00:25:53.697 "data_wr_pool_size": 0, 00:25:53.697 "dif_insert_or_strip": false, 00:25:53.697 "in_capsule_data_size": 4096, 00:25:53.697 "io_unit_size": 131072, 00:25:53.697 "max_aq_depth": 128, 00:25:53.697 "max_io_qpairs_per_ctrlr": 127, 00:25:53.697 "max_io_size": 131072, 00:25:53.697 "max_queue_depth": 128, 00:25:53.697 "num_shared_buffers": 511, 00:25:53.697 "sock_priority": 0, 00:25:53.697 "trtype": "TCP", 00:25:53.697 "zcopy": false 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_create_subsystem", 00:25:53.697 "params": { 00:25:53.697 "allow_any_host": false, 00:25:53.697 "ana_reporting": false, 00:25:53.697 "max_cntlid": 65519, 00:25:53.697 "max_namespaces": 32, 00:25:53.697 "min_cntlid": 1, 00:25:53.697 "model_number": "SPDK bdev Controller", 00:25:53.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.697 "serial_number": "00000000000000000000" 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_subsystem_add_host", 00:25:53.697 "params": { 00:25:53.697 "host": "nqn.2016-06.io.spdk:host1", 00:25:53.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.697 "psk": "key0" 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_subsystem_add_ns", 00:25:53.697 "params": { 00:25:53.697 "namespace": { 00:25:53.697 "bdev_name": "malloc0", 00:25:53.697 "nguid": "D4D77B23F7D940C29C12A4D431C8EB24", 00:25:53.697 "no_auto_visible": false, 00:25:53.697 "nsid": 1, 00:25:53.697 "uuid": "d4d77b23-f7d9-40c2-9c12-a4d431c8eb24" 00:25:53.697 }, 00:25:53.697 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:53.697 } 00:25:53.697 }, 00:25:53.697 { 00:25:53.697 "method": "nvmf_subsystem_add_listener", 00:25:53.697 "params": { 00:25:53.697 "listen_address": { 00:25:53.697 "adrfam": "IPv4", 00:25:53.697 "traddr": "10.0.0.2", 00:25:53.697 "trsvcid": "4420", 00:25:53.697 
"trtype": "TCP" 00:25:53.697 }, 00:25:53.697 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.697 "secure_channel": true 00:25:53.697 } 00:25:53.697 } 00:25:53.697 ] 00:25:53.697 } 00:25:53.697 ] 00:25:53.697 }' 00:25:53.697 15:44:23 -- target/tls.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:25:53.956 15:44:24 -- target/tls.sh@264 -- # bperfcfg='{ 00:25:53.956 "subsystems": [ 00:25:53.956 { 00:25:53.956 "subsystem": "keyring", 00:25:53.956 "config": [ 00:25:53.956 { 00:25:53.957 "method": "keyring_file_add_key", 00:25:53.957 "params": { 00:25:53.957 "name": "key0", 00:25:53.957 "path": "/tmp/tmp.fBJPVbgFTx" 00:25:53.957 } 00:25:53.957 } 00:25:53.957 ] 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "subsystem": "iobuf", 00:25:53.957 "config": [ 00:25:53.957 { 00:25:53.957 "method": "iobuf_set_options", 00:25:53.957 "params": { 00:25:53.957 "large_bufsize": 135168, 00:25:53.957 "large_pool_count": 1024, 00:25:53.957 "small_bufsize": 8192, 00:25:53.957 "small_pool_count": 8192 00:25:53.957 } 00:25:53.957 } 00:25:53.957 ] 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "subsystem": "sock", 00:25:53.957 "config": [ 00:25:53.957 { 00:25:53.957 "method": "sock_impl_set_options", 00:25:53.957 "params": { 00:25:53.957 "enable_ktls": false, 00:25:53.957 "enable_placement_id": 0, 00:25:53.957 "enable_quickack": false, 00:25:53.957 "enable_recv_pipe": true, 00:25:53.957 "enable_zerocopy_send_client": false, 00:25:53.957 "enable_zerocopy_send_server": true, 00:25:53.957 "impl_name": "posix", 00:25:53.957 "recv_buf_size": 2097152, 00:25:53.957 "send_buf_size": 2097152, 00:25:53.957 "tls_version": 0, 00:25:53.957 "zerocopy_threshold": 0 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "sock_impl_set_options", 00:25:53.957 "params": { 00:25:53.957 "enable_ktls": false, 00:25:53.957 "enable_placement_id": 0, 00:25:53.957 "enable_quickack": false, 00:25:53.957 "enable_recv_pipe": true, 00:25:53.957 "enable_zerocopy_send_client": false, 00:25:53.957 "enable_zerocopy_send_server": true, 00:25:53.957 "impl_name": "ssl", 00:25:53.957 "recv_buf_size": 4096, 00:25:53.957 "send_buf_size": 4096, 00:25:53.957 "tls_version": 0, 00:25:53.957 "zerocopy_threshold": 0 00:25:53.957 } 00:25:53.957 } 00:25:53.957 ] 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "subsystem": "vmd", 00:25:53.957 "config": [] 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "subsystem": "accel", 00:25:53.957 "config": [ 00:25:53.957 { 00:25:53.957 "method": "accel_set_options", 00:25:53.957 "params": { 00:25:53.957 "buf_count": 2048, 00:25:53.957 "large_cache_size": 16, 00:25:53.957 "sequence_count": 2048, 00:25:53.957 "small_cache_size": 128, 00:25:53.957 "task_count": 2048 00:25:53.957 } 00:25:53.957 } 00:25:53.957 ] 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "subsystem": "bdev", 00:25:53.957 "config": [ 00:25:53.957 { 00:25:53.957 "method": "bdev_set_options", 00:25:53.957 "params": { 00:25:53.957 "bdev_auto_examine": true, 00:25:53.957 "bdev_io_cache_size": 256, 00:25:53.957 "bdev_io_pool_size": 65535, 00:25:53.957 "iobuf_large_cache_size": 16, 00:25:53.957 "iobuf_small_cache_size": 128 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "bdev_raid_set_options", 00:25:53.957 "params": { 00:25:53.957 "process_window_size_kb": 1024 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "bdev_iscsi_set_options", 00:25:53.957 "params": { 00:25:53.957 "timeout_sec": 30 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": 
"bdev_nvme_set_options", 00:25:53.957 "params": { 00:25:53.957 "action_on_timeout": "none", 00:25:53.957 "allow_accel_sequence": false, 00:25:53.957 "arbitration_burst": 0, 00:25:53.957 "bdev_retry_count": 3, 00:25:53.957 "ctrlr_loss_timeout_sec": 0, 00:25:53.957 "delay_cmd_submit": true, 00:25:53.957 "dhchap_dhgroups": [ 00:25:53.957 "null", 00:25:53.957 "ffdhe2048", 00:25:53.957 "ffdhe3072", 00:25:53.957 "ffdhe4096", 00:25:53.957 "ffdhe6144", 00:25:53.957 "ffdhe8192" 00:25:53.957 ], 00:25:53.957 "dhchap_digests": [ 00:25:53.957 "sha256", 00:25:53.957 "sha384", 00:25:53.957 "sha512" 00:25:53.957 ], 00:25:53.957 "disable_auto_failback": false, 00:25:53.957 "fast_io_fail_timeout_sec": 0, 00:25:53.957 "generate_uuids": false, 00:25:53.957 "high_priority_weight": 0, 00:25:53.957 "io_path_stat": false, 00:25:53.957 "io_queue_requests": 512, 00:25:53.957 "keep_alive_timeout_ms": 10000, 00:25:53.957 "low_priority_weight": 0, 00:25:53.957 "medium_priority_weight": 0, 00:25:53.957 "nvme_adminq_poll_period_us": 10000, 00:25:53.957 "nvme_error_stat": false, 00:25:53.957 "nvme_ioq_poll_period_us": 0, 00:25:53.957 "rdma_cm_event_timeout_ms": 0, 00:25:53.957 "rdma_max_cq_size": 0, 00:25:53.957 "rdma_srq_size": 0, 00:25:53.957 "reconnect_delay_sec": 0, 00:25:53.957 "timeout_admin_us": 0, 00:25:53.957 "timeout_us": 0, 00:25:53.957 "transport_ack_timeout": 0, 00:25:53.957 "transport_retry_count": 4, 00:25:53.957 "transport_tos": 0 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "bdev_nvme_attach_controller", 00:25:53.957 "params": { 00:25:53.957 "adrfam": "IPv4", 00:25:53.957 "ctrlr_loss_timeout_sec": 0, 00:25:53.957 "ddgst": false, 00:25:53.957 "fast_io_fail_timeout_sec": 0, 00:25:53.957 "hdgst": false, 00:25:53.957 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:53.957 "name": "nvme0", 00:25:53.957 "prchk_guard": false, 00:25:53.957 "prchk_reftag": false, 00:25:53.957 "psk": "key0", 00:25:53.957 "reconnect_delay_sec": 0, 00:25:53.957 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:53.957 "traddr": "10.0.0.2", 00:25:53.957 "trsvcid": "4420", 00:25:53.957 "trtype": "TCP" 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "bdev_nvme_set_hotplug", 00:25:53.957 "params": { 00:25:53.957 "enable": false, 00:25:53.957 "period_us": 100000 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "bdev_enable_histogram", 00:25:53.957 "params": { 00:25:53.957 "enable": true, 00:25:53.957 "name": "nvme0n1" 00:25:53.957 } 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "method": "bdev_wait_for_examine" 00:25:53.957 } 00:25:53.957 ] 00:25:53.957 }, 00:25:53.957 { 00:25:53.957 "subsystem": "nbd", 00:25:53.957 "config": [] 00:25:53.957 } 00:25:53.957 ] 00:25:53.957 }' 00:25:53.957 15:44:24 -- target/tls.sh@266 -- # killprocess 78669 00:25:53.957 15:44:24 -- common/autotest_common.sh@936 -- # '[' -z 78669 ']' 00:25:53.957 15:44:24 -- common/autotest_common.sh@940 -- # kill -0 78669 00:25:53.957 15:44:24 -- common/autotest_common.sh@941 -- # uname 00:25:53.957 15:44:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:53.957 15:44:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78669 00:25:53.957 killing process with pid 78669 00:25:53.957 Received shutdown signal, test time was about 1.000000 seconds 00:25:53.957 00:25:53.957 Latency(us) 00:25:53.957 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:53.957 
=================================================================================================================== 00:25:53.957 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:53.957 15:44:24 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:53.957 15:44:24 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:53.957 15:44:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78669' 00:25:53.957 15:44:24 -- common/autotest_common.sh@955 -- # kill 78669 00:25:53.957 15:44:24 -- common/autotest_common.sh@960 -- # wait 78669 00:25:54.216 15:44:24 -- target/tls.sh@267 -- # killprocess 78618 00:25:54.216 15:44:24 -- common/autotest_common.sh@936 -- # '[' -z 78618 ']' 00:25:54.216 15:44:24 -- common/autotest_common.sh@940 -- # kill -0 78618 00:25:54.216 15:44:24 -- common/autotest_common.sh@941 -- # uname 00:25:54.216 15:44:24 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:54.216 15:44:24 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78618 00:25:54.474 killing process with pid 78618 00:25:54.474 15:44:24 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:54.474 15:44:24 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:54.474 15:44:24 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78618' 00:25:54.474 15:44:24 -- common/autotest_common.sh@955 -- # kill 78618 00:25:54.474 15:44:24 -- common/autotest_common.sh@960 -- # wait 78618 00:25:54.749 15:44:24 -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:25:54.749 15:44:24 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:54.749 15:44:24 -- target/tls.sh@269 -- # echo '{ 00:25:54.749 "subsystems": [ 00:25:54.749 { 00:25:54.749 "subsystem": "keyring", 00:25:54.749 "config": [ 00:25:54.749 { 00:25:54.749 "method": "keyring_file_add_key", 00:25:54.749 "params": { 00:25:54.749 "name": "key0", 00:25:54.749 "path": "/tmp/tmp.fBJPVbgFTx" 00:25:54.749 } 00:25:54.749 } 00:25:54.749 ] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "iobuf", 00:25:54.749 "config": [ 00:25:54.749 { 00:25:54.749 "method": "iobuf_set_options", 00:25:54.749 "params": { 00:25:54.749 "large_bufsize": 135168, 00:25:54.749 "large_pool_count": 1024, 00:25:54.749 "small_bufsize": 8192, 00:25:54.749 "small_pool_count": 8192 00:25:54.749 } 00:25:54.749 } 00:25:54.749 ] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "sock", 00:25:54.749 "config": [ 00:25:54.749 { 00:25:54.749 "method": "sock_impl_set_options", 00:25:54.749 "params": { 00:25:54.749 "enable_ktls": false, 00:25:54.749 "enable_placement_id": 0, 00:25:54.749 "enable_quickack": false, 00:25:54.749 "enable_recv_pipe": true, 00:25:54.749 "enable_zerocopy_send_client": false, 00:25:54.749 "enable_zerocopy_send_server": true, 00:25:54.749 "impl_name": "posix", 00:25:54.749 "recv_buf_size": 2097152, 00:25:54.749 "send_buf_size": 2097152, 00:25:54.749 "tls_version": 0, 00:25:54.749 "zerocopy_threshold": 0 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "sock_impl_set_options", 00:25:54.749 "params": { 00:25:54.749 "enable_ktls": false, 00:25:54.749 "enable_placement_id": 0, 00:25:54.749 "enable_quickack": false, 00:25:54.749 "enable_recv_pipe": true, 00:25:54.749 "enable_zerocopy_send_client": false, 00:25:54.749 "enable_zerocopy_send_server": true, 00:25:54.749 "impl_name": "ssl", 00:25:54.749 "recv_buf_size": 4096, 00:25:54.749 "send_buf_size": 4096, 00:25:54.749 "tls_version": 0, 00:25:54.749 "zerocopy_threshold": 0 00:25:54.749 } 
00:25:54.749 } 00:25:54.749 ] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "vmd", 00:25:54.749 "config": [] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "accel", 00:25:54.749 "config": [ 00:25:54.749 { 00:25:54.749 "method": "accel_set_options", 00:25:54.749 "params": { 00:25:54.749 "buf_count": 2048, 00:25:54.749 "large_cache_size": 16, 00:25:54.749 "sequence_count": 2048, 00:25:54.749 "small_cache_size": 128, 00:25:54.749 "task_count": 2048 00:25:54.749 } 00:25:54.749 } 00:25:54.749 ] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "bdev", 00:25:54.749 "config": [ 00:25:54.749 { 00:25:54.749 "method": "bdev_set_options", 00:25:54.749 "params": { 00:25:54.749 "bdev_auto_examine": true, 00:25:54.749 "bdev_io_cache_size": 256, 00:25:54.749 "bdev_io_pool_size": 65535, 00:25:54.749 "iobuf_large_cache_size": 16, 00:25:54.749 "iobuf_small_cache_size": 128 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "bdev_raid_set_options", 00:25:54.749 "params": { 00:25:54.749 "process_window_size_kb": 1024 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "bdev_iscsi_set_options", 00:25:54.749 "params": { 00:25:54.749 "timeout_sec": 30 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "bdev_nvme_set_options", 00:25:54.749 "params": { 00:25:54.749 "action_on_timeout": "none", 00:25:54.749 "allow_accel_sequence": false, 00:25:54.749 "arbitration_burst": 0, 00:25:54.749 "bdev_retry_count": 3, 00:25:54.749 "ctrlr_loss_timeout_sec": 0, 00:25:54.749 "delay_cmd_submit": true, 00:25:54.749 "dhchap_dhgroups": [ 00:25:54.749 "null", 00:25:54.749 "ffdhe2048", 00:25:54.749 "ffdhe3072", 00:25:54.749 "ffdhe4096", 00:25:54.749 "ffdhe6144", 00:25:54.749 "ffdhe8192" 00:25:54.749 ], 00:25:54.749 "dhchap_digests": [ 00:25:54.749 "sha256", 00:25:54.749 "sha384", 00:25:54.749 "sha512" 00:25:54.749 ], 00:25:54.749 "disable_auto_failback": false, 00:25:54.749 "fast_io_fail_timeout_sec": 0, 00:25:54.749 "generate_uuids": false, 00:25:54.749 "high_priority_weight": 0, 00:25:54.749 "io_path_stat": false, 00:25:54.749 "io_queue_requests": 0, 00:25:54.749 "keep_alive_timeout_ms": 10000, 00:25:54.749 "low_priority_weight": 0, 00:25:54.749 "medium_priority_weight": 0, 00:25:54.749 "nvme_adminq_poll_period_us": 10000, 00:25:54.749 "nvme_error_stat": false, 00:25:54.749 "nvme_ioq_poll_period_us": 0, 00:25:54.749 "rdma_cm_event_timeout_ms": 0, 00:25:54.749 "rdma_max_cq_size": 0, 00:25:54.749 "rdma_srq_size": 0, 00:25:54.749 "reconnect_delay_sec": 0, 00:25:54.749 "timeout_admin_us": 0, 00:25:54.749 "timeout_us": 0, 00:25:54.749 "transport_ack_timeout": 0, 00:25:54.749 "transport_retry_count": 4, 00:25:54.749 "transport_tos": 0 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "bdev_nvme_set_hotplug", 00:25:54.749 "params": { 00:25:54.749 "enable": false, 00:25:54.749 "period_us": 100000 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "bdev_malloc_create", 00:25:54.749 "params": { 00:25:54.749 "block_size": 4096, 00:25:54.749 "name": "malloc0", 00:25:54.749 "num_blocks": 8192, 00:25:54.749 "optimal_io_boundary": 0, 00:25:54.749 "physical_block_size": 4096, 00:25:54.749 "uuid": "d4d77b23-f7d9-40c2-9c12-a4d431c8eb24" 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "bdev_wait_for_examine" 00:25:54.749 } 00:25:54.749 ] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "nbd", 00:25:54.749 "config": [] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "scheduler", 00:25:54.749 
"config": [ 00:25:54.749 { 00:25:54.749 "method": "framework_set_scheduler", 00:25:54.749 "params": { 00:25:54.749 "name": "static" 00:25:54.749 } 00:25:54.749 } 00:25:54.749 ] 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "subsystem": "nvmf", 00:25:54.749 "config": [ 00:25:54.749 { 00:25:54.749 "method": "nvmf_set_config", 00:25:54.749 "params": { 00:25:54.749 "admin_cmd_passthru": { 00:25:54.749 "identify_ctrlr": false 00:25:54.749 }, 00:25:54.749 "discovery_filter": "match_any" 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "nvmf_set_max_subsystems", 00:25:54.749 "params": { 00:25:54.749 "max_subsystems": 1024 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "nvmf_set_crdt", 00:25:54.749 "params": { 00:25:54.749 "crdt1": 0, 00:25:54.749 "crdt2": 0, 00:25:54.749 "crdt3": 0 00:25:54.749 } 00:25:54.749 }, 00:25:54.749 { 00:25:54.749 "method": "nvmf_create_transport", 00:25:54.749 "params": { 00:25:54.749 "abort_timeout_sec": 1, 00:25:54.749 "ack_timeout": 0, 00:25:54.749 "buf_cache_size": 4294967295, 00:25:54.749 "c2h_success": false, 00:25:54.749 "data_wr_pool_size": 0, 00:25:54.749 "dif_insert_or_strip": false, 00:25:54.749 "in_capsule_data_size": 4096, 00:25:54.749 "io_unit_size": 131072, 00:25:54.749 "max_aq_depth": 128, 00:25:54.749 "max_io_qpairs_per_ctrlr": 127, 00:25:54.749 "max_io_size": 131072, 00:25:54.749 "max_queue_depth": 128, 00:25:54.749 "num_shared_buffers": 511, 00:25:54.749 "sock_priority": 0, 00:25:54.749 "trtype": "TCP", 00:25:54.749 "zcopy": false 00:25:54.749 } 00:25:54.750 }, 00:25:54.750 { 00:25:54.750 "method": "nvmf_create_subsystem", 00:25:54.750 "params": { 00:25:54.750 "allow_any_host": false, 00:25:54.750 "ana_reporting": false, 00:25:54.750 "max_cntlid": 65519, 00:25:54.750 "max_namespaces": 32, 00:25:54.750 "min_cntlid": 1, 00:25:54.750 "model_number": "SPDK bdev Controller", 00:25:54.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.750 "serial_number": "00000000000000000000" 00:25:54.750 } 00:25:54.750 }, 00:25:54.750 { 00:25:54.750 "method": "nvmf_subsystem_add_host", 00:25:54.750 "params": { 00:25:54.750 "host": "nqn.2016-06.io.spdk:host1", 00:25:54.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.750 "psk": "key0" 00:25:54.750 } 00:25:54.750 }, 00:25:54.750 { 00:25:54.750 "method": "nvmf_subsystem_add_ns", 00:25:54.750 "params": { 00:25:54.750 "namespace": { 00:25:54.750 "bdev_name": "malloc0", 00:25:54.750 "nguid": "D4D77B23F7D940C29C12A4D431C8EB24", 00:25:54.750 "no_auto_visible": false, 00:25:54.750 "nsid": 1, 00:25:54.750 "uuid": "d4d77b23-f7d9-40c2-9c12-a4d431c8eb24" 00:25:54.750 }, 00:25:54.750 "nqn": "nqn.2016-06.io.spdk:cnode1" 00:25:54.750 } 00:25:54.750 }, 00:25:54.750 { 00:25:54.750 "method": "nvmf_subsystem_add_listener", 00:25:54.750 "params": { 00:25:54.750 "listen_address": { 00:25:54.750 "adrfam": "IPv4", 00:25:54.750 "traddr": "10.0.0.2", 00:25:54.750 "trsvcid": "4420", 00:25:54.750 "trtype": "TCP" 00:25:54.750 }, 00:25:54.750 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:54.750 "secure_channel": true 00:25:54.750 } 00:25:54.750 } 00:25:54.750 ] 00:25:54.750 } 00:25:54.750 ] 00:25:54.750 }' 00:25:54.750 15:44:24 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:54.750 15:44:24 -- common/autotest_common.sh@10 -- # set +x 00:25:54.750 15:44:24 -- nvmf/common.sh@470 -- # nvmfpid=78764 00:25:54.750 15:44:24 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:25:54.750 15:44:24 -- nvmf/common.sh@471 -- # 
waitforlisten 78764 00:25:54.750 15:44:24 -- common/autotest_common.sh@817 -- # '[' -z 78764 ']' 00:25:54.750 15:44:24 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:54.750 15:44:24 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:54.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:54.750 15:44:24 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:54.750 15:44:24 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:54.750 15:44:24 -- common/autotest_common.sh@10 -- # set +x 00:25:54.750 [2024-04-26 15:44:24.832804] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:54.750 [2024-04-26 15:44:24.832906] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:54.750 [2024-04-26 15:44:24.968588] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:55.017 [2024-04-26 15:44:25.092426] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:55.018 [2024-04-26 15:44:25.092492] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:55.018 [2024-04-26 15:44:25.092505] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:55.018 [2024-04-26 15:44:25.092514] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:55.018 [2024-04-26 15:44:25.092522] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:55.018 [2024-04-26 15:44:25.092633] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:25:55.275 [2024-04-26 15:44:25.331317] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:55.275 [2024-04-26 15:44:25.363245] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:55.275 [2024-04-26 15:44:25.363489] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:55.840 15:44:25 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:55.840 15:44:25 -- common/autotest_common.sh@850 -- # return 0 00:25:55.840 15:44:25 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:25:55.840 15:44:25 -- common/autotest_common.sh@716 -- # xtrace_disable 00:25:55.840 15:44:25 -- common/autotest_common.sh@10 -- # set +x 00:25:55.840 15:44:25 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:55.840 15:44:25 -- target/tls.sh@272 -- # bdevperf_pid=78808 00:25:55.840 15:44:25 -- target/tls.sh@273 -- # waitforlisten 78808 /var/tmp/bdevperf.sock 00:25:55.840 15:44:25 -- common/autotest_common.sh@817 -- # '[' -z 78808 ']' 00:25:55.840 15:44:25 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:25:55.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:25:55.840 15:44:25 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:55.840 15:44:25 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:25:55.840 15:44:25 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:55.840 15:44:25 -- target/tls.sh@270 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:25:55.840 15:44:25 -- common/autotest_common.sh@10 -- # set +x 00:25:55.840 15:44:25 -- target/tls.sh@270 -- # echo '{ 00:25:55.840 "subsystems": [ 00:25:55.840 { 00:25:55.840 "subsystem": "keyring", 00:25:55.840 "config": [ 00:25:55.840 { 00:25:55.840 "method": "keyring_file_add_key", 00:25:55.840 "params": { 00:25:55.840 "name": "key0", 00:25:55.840 "path": "/tmp/tmp.fBJPVbgFTx" 00:25:55.840 } 00:25:55.840 } 00:25:55.840 ] 00:25:55.840 }, 00:25:55.840 { 00:25:55.840 "subsystem": "iobuf", 00:25:55.840 "config": [ 00:25:55.840 { 00:25:55.840 "method": "iobuf_set_options", 00:25:55.840 "params": { 00:25:55.840 "large_bufsize": 135168, 00:25:55.840 "large_pool_count": 1024, 00:25:55.840 "small_bufsize": 8192, 00:25:55.840 "small_pool_count": 8192 00:25:55.840 } 00:25:55.840 } 00:25:55.840 ] 00:25:55.840 }, 00:25:55.840 { 00:25:55.840 "subsystem": "sock", 00:25:55.840 "config": [ 00:25:55.840 { 00:25:55.840 "method": "sock_impl_set_options", 00:25:55.840 "params": { 00:25:55.840 "enable_ktls": false, 00:25:55.840 "enable_placement_id": 0, 00:25:55.840 "enable_quickack": false, 00:25:55.840 "enable_recv_pipe": true, 00:25:55.840 "enable_zerocopy_send_client": false, 00:25:55.840 "enable_zerocopy_send_server": true, 00:25:55.840 "impl_name": "posix", 00:25:55.840 "recv_buf_size": 2097152, 00:25:55.840 "send_buf_size": 2097152, 00:25:55.840 "tls_version": 0, 00:25:55.840 "zerocopy_threshold": 0 00:25:55.840 } 00:25:55.840 }, 00:25:55.840 { 00:25:55.840 "method": "sock_impl_set_options", 00:25:55.840 "params": { 00:25:55.840 "enable_ktls": false, 00:25:55.840 "enable_placement_id": 0, 00:25:55.840 "enable_quickack": false, 00:25:55.840 "enable_recv_pipe": true, 00:25:55.840 "enable_zerocopy_send_client": false, 00:25:55.840 "enable_zerocopy_send_server": true, 00:25:55.840 "impl_name": "ssl", 00:25:55.840 "recv_buf_size": 4096, 00:25:55.840 "send_buf_size": 4096, 00:25:55.840 "tls_version": 0, 00:25:55.840 "zerocopy_threshold": 0 00:25:55.840 } 00:25:55.840 } 00:25:55.840 ] 00:25:55.840 }, 00:25:55.841 { 00:25:55.841 "subsystem": "vmd", 00:25:55.841 "config": [] 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "subsystem": "accel", 00:25:55.841 "config": [ 00:25:55.841 { 00:25:55.841 "method": "accel_set_options", 00:25:55.841 "params": { 00:25:55.841 "buf_count": 2048, 00:25:55.841 "large_cache_size": 16, 00:25:55.841 "sequence_count": 2048, 00:25:55.841 "small_cache_size": 128, 00:25:55.841 "task_count": 2048 00:25:55.841 } 00:25:55.841 } 00:25:55.841 ] 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "subsystem": "bdev", 00:25:55.841 "config": [ 00:25:55.841 { 00:25:55.841 "method": "bdev_set_options", 00:25:55.841 "params": { 00:25:55.841 "bdev_auto_examine": true, 00:25:55.841 "bdev_io_cache_size": 256, 00:25:55.841 "bdev_io_pool_size": 65535, 00:25:55.841 "iobuf_large_cache_size": 16, 00:25:55.841 "iobuf_small_cache_size": 128 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_raid_set_options", 00:25:55.841 "params": { 00:25:55.841 "process_window_size_kb": 1024 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_iscsi_set_options", 00:25:55.841 "params": { 00:25:55.841 "timeout_sec": 30 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_nvme_set_options", 00:25:55.841 "params": 
{ 00:25:55.841 "action_on_timeout": "none", 00:25:55.841 "allow_accel_sequence": false, 00:25:55.841 "arbitration_burst": 0, 00:25:55.841 "bdev_retry_count": 3, 00:25:55.841 "ctrlr_loss_timeout_sec": 0, 00:25:55.841 "delay_cmd_submit": true, 00:25:55.841 "dhchap_dhgroups": [ 00:25:55.841 "null", 00:25:55.841 "ffdhe2048", 00:25:55.841 "ffdhe3072", 00:25:55.841 "ffdhe4096", 00:25:55.841 "ffdhe6144", 00:25:55.841 "ffdhe8192" 00:25:55.841 ], 00:25:55.841 "dhchap_digests": [ 00:25:55.841 "sha256", 00:25:55.841 "sha384", 00:25:55.841 "sha512" 00:25:55.841 ], 00:25:55.841 "disable_auto_failback": false, 00:25:55.841 "fast_io_fail_timeout_sec": 0, 00:25:55.841 "generate_uuids": false, 00:25:55.841 "high_priority_weight": 0, 00:25:55.841 "io_path_stat": false, 00:25:55.841 "io_queue_requests": 512, 00:25:55.841 "keep_alive_timeout_ms": 10000, 00:25:55.841 "low_priority_weight": 0, 00:25:55.841 "medium_priority_weight": 0, 00:25:55.841 "nvme_adminq_poll_period_us": 10000, 00:25:55.841 "nvme_error_stat": false, 00:25:55.841 "nvme_ioq_poll_period_us": 0, 00:25:55.841 "rdma_cm_event_timeout_ms": 0, 00:25:55.841 "rdma_max_cq_size": 0, 00:25:55.841 "rdma_srq_size": 0, 00:25:55.841 "reconnect_delay_sec": 0, 00:25:55.841 "timeout_admin_us": 0, 00:25:55.841 "timeout_us": 0, 00:25:55.841 "transport_ack_timeout": 0, 00:25:55.841 "transport_retry_count": 4, 00:25:55.841 "transport_tos": 0 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_nvme_attach_controller", 00:25:55.841 "params": { 00:25:55.841 "adrfam": "IPv4", 00:25:55.841 "ctrlr_loss_timeout_sec": 0, 00:25:55.841 "ddgst": false, 00:25:55.841 "fast_io_fail_timeout_sec": 0, 00:25:55.841 "hdgst": false, 00:25:55.841 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:55.841 "name": "nvme0", 00:25:55.841 "prchk_guard": false, 00:25:55.841 "prchk_reftag": false, 00:25:55.841 "psk": "key0", 00:25:55.841 "reconnect_delay_sec": 0, 00:25:55.841 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:55.841 "traddr": "10.0.0.2", 00:25:55.841 "trsvcid": "4420", 00:25:55.841 "trtype": "TCP" 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_nvme_set_hotplug", 00:25:55.841 "params": { 00:25:55.841 "enable": false, 00:25:55.841 "period_us": 100000 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_enable_histogram", 00:25:55.841 "params": { 00:25:55.841 "enable": true, 00:25:55.841 "name": "nvme0n1" 00:25:55.841 } 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "method": "bdev_wait_for_examine" 00:25:55.841 } 00:25:55.841 ] 00:25:55.841 }, 00:25:55.841 { 00:25:55.841 "subsystem": "nbd", 00:25:55.841 "config": [] 00:25:55.841 } 00:25:55.841 ] 00:25:55.841 }' 00:25:55.841 [2024-04-26 15:44:25.936626] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:25:55.841 [2024-04-26 15:44:25.936716] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78808 ] 00:25:55.841 [2024-04-26 15:44:26.073505] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:56.099 [2024-04-26 15:44:26.201898] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:25:56.099 [2024-04-26 15:44:26.372477] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:56.666 15:44:26 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:25:56.666 15:44:26 -- common/autotest_common.sh@850 -- # return 0 00:25:56.666 15:44:26 -- target/tls.sh@275 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:25:56.666 15:44:26 -- target/tls.sh@275 -- # jq -r '.[].name' 00:25:56.924 15:44:27 -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.924 15:44:27 -- target/tls.sh@276 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:25:57.183 Running I/O for 1 seconds... 00:25:58.119 00:25:58.119 Latency(us) 00:25:58.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.119 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:25:58.119 Verification LBA range: start 0x0 length 0x2000 00:25:58.119 nvme0n1 : 1.02 3922.77 15.32 0.00 0.00 32170.90 4468.36 23831.27 00:25:58.119 =================================================================================================================== 00:25:58.119 Total : 3922.77 15.32 0.00 0.00 32170.90 4468.36 23831.27 00:25:58.119 0 00:25:58.119 15:44:28 -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:25:58.119 15:44:28 -- target/tls.sh@279 -- # cleanup 00:25:58.119 15:44:28 -- target/tls.sh@15 -- # process_shm --id 0 00:25:58.119 15:44:28 -- common/autotest_common.sh@794 -- # type=--id 00:25:58.119 15:44:28 -- common/autotest_common.sh@795 -- # id=0 00:25:58.119 15:44:28 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:25:58.119 15:44:28 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:25:58.119 15:44:28 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:25:58.119 15:44:28 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:25:58.119 15:44:28 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:25:58.119 15:44:28 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:25:58.119 nvmf_trace.0 00:25:58.119 15:44:28 -- common/autotest_common.sh@809 -- # return 0 00:25:58.119 15:44:28 -- target/tls.sh@16 -- # killprocess 78808 00:25:58.119 15:44:28 -- common/autotest_common.sh@936 -- # '[' -z 78808 ']' 00:25:58.119 15:44:28 -- common/autotest_common.sh@940 -- # kill -0 78808 00:25:58.119 15:44:28 -- common/autotest_common.sh@941 -- # uname 00:25:58.119 15:44:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:58.377 15:44:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78808 00:25:58.377 15:44:28 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:25:58.377 15:44:28 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:25:58.377 killing process with pid 78808 00:25:58.377 15:44:28 -- 
common/autotest_common.sh@954 -- # echo 'killing process with pid 78808' 00:25:58.377 15:44:28 -- common/autotest_common.sh@955 -- # kill 78808 00:25:58.377 Received shutdown signal, test time was about 1.000000 seconds 00:25:58.377 00:25:58.377 Latency(us) 00:25:58.377 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:58.377 =================================================================================================================== 00:25:58.377 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:25:58.377 15:44:28 -- common/autotest_common.sh@960 -- # wait 78808 00:25:58.635 15:44:28 -- target/tls.sh@17 -- # nvmftestfini 00:25:58.635 15:44:28 -- nvmf/common.sh@477 -- # nvmfcleanup 00:25:58.635 15:44:28 -- nvmf/common.sh@117 -- # sync 00:25:58.635 15:44:28 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.635 15:44:28 -- nvmf/common.sh@120 -- # set +e 00:25:58.635 15:44:28 -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.635 15:44:28 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.635 rmmod nvme_tcp 00:25:58.635 rmmod nvme_fabrics 00:25:58.635 rmmod nvme_keyring 00:25:58.635 15:44:28 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.635 15:44:28 -- nvmf/common.sh@124 -- # set -e 00:25:58.635 15:44:28 -- nvmf/common.sh@125 -- # return 0 00:25:58.635 15:44:28 -- nvmf/common.sh@478 -- # '[' -n 78764 ']' 00:25:58.635 15:44:28 -- nvmf/common.sh@479 -- # killprocess 78764 00:25:58.635 15:44:28 -- common/autotest_common.sh@936 -- # '[' -z 78764 ']' 00:25:58.635 15:44:28 -- common/autotest_common.sh@940 -- # kill -0 78764 00:25:58.635 15:44:28 -- common/autotest_common.sh@941 -- # uname 00:25:58.635 15:44:28 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:25:58.635 15:44:28 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 78764 00:25:58.635 15:44:28 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:25:58.635 15:44:28 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:25:58.635 15:44:28 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 78764' 00:25:58.635 killing process with pid 78764 00:25:58.635 15:44:28 -- common/autotest_common.sh@955 -- # kill 78764 00:25:58.635 15:44:28 -- common/autotest_common.sh@960 -- # wait 78764 00:25:58.893 15:44:29 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:25:58.893 15:44:29 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:25:58.893 15:44:29 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:25:58.893 15:44:29 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:58.893 15:44:29 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:58.893 15:44:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:58.893 15:44:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:58.893 15:44:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:58.893 15:44:29 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:25:58.893 15:44:29 -- target/tls.sh@18 -- # rm -f /tmp/tmp.FZ0eNWGbVp /tmp/tmp.6RSIeK5tPi /tmp/tmp.fBJPVbgFTx 00:25:58.893 ************************************ 00:25:58.893 END TEST nvmf_tls 00:25:58.893 ************************************ 00:25:58.893 00:25:58.893 real 1m27.904s 00:25:58.893 user 2m19.604s 00:25:58.893 sys 0m28.710s 00:25:58.893 15:44:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:25:58.893 15:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:58.893 15:44:29 -- nvmf/nvmf.sh@61 -- # run_test nvmf_fips 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:58.893 15:44:29 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:25:58.893 15:44:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:25:58.893 15:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:59.151 ************************************ 00:25:59.151 START TEST nvmf_fips 00:25:59.151 ************************************ 00:25:59.151 15:44:29 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:25:59.151 * Looking for test storage... 00:25:59.151 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/fips 00:25:59.151 15:44:29 -- fips/fips.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:25:59.151 15:44:29 -- nvmf/common.sh@7 -- # uname -s 00:25:59.151 15:44:29 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.151 15:44:29 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.151 15:44:29 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.151 15:44:29 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.151 15:44:29 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:59.151 15:44:29 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:59.151 15:44:29 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.151 15:44:29 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:59.151 15:44:29 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.151 15:44:29 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:59.151 15:44:29 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:25:59.151 15:44:29 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:25:59.151 15:44:29 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.151 15:44:29 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:59.151 15:44:29 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:25:59.151 15:44:29 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.151 15:44:29 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:25:59.151 15:44:29 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.151 15:44:29 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.151 15:44:29 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.151 15:44:29 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.151 15:44:29 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.151 15:44:29 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.151 15:44:29 -- paths/export.sh@5 -- # export PATH 00:25:59.151 15:44:29 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.151 15:44:29 -- nvmf/common.sh@47 -- # : 0 00:25:59.151 15:44:29 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:59.151 15:44:29 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:59.151 15:44:29 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:59.151 15:44:29 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.151 15:44:29 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.151 15:44:29 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:59.151 15:44:29 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:59.151 15:44:29 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:59.151 15:44:29 -- fips/fips.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:59.151 15:44:29 -- fips/fips.sh@89 -- # check_openssl_version 00:25:59.151 15:44:29 -- fips/fips.sh@83 -- # local target=3.0.0 00:25:59.151 15:44:29 -- fips/fips.sh@85 -- # openssl version 00:25:59.151 15:44:29 -- fips/fips.sh@85 -- # awk '{print $2}' 00:25:59.151 15:44:29 -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:25:59.151 15:44:29 -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:25:59.151 15:44:29 -- scripts/common.sh@330 -- # local ver1 ver1_l 00:25:59.151 15:44:29 -- scripts/common.sh@331 -- # local ver2 ver2_l 00:25:59.151 15:44:29 -- scripts/common.sh@333 -- # IFS=.-: 00:25:59.151 15:44:29 -- scripts/common.sh@333 -- # read -ra ver1 00:25:59.151 15:44:29 -- scripts/common.sh@334 -- # IFS=.-: 00:25:59.151 15:44:29 -- scripts/common.sh@334 -- # read -ra ver2 00:25:59.151 15:44:29 -- scripts/common.sh@335 -- # local 'op=>=' 00:25:59.151 15:44:29 -- scripts/common.sh@337 -- # ver1_l=3 00:25:59.151 15:44:29 -- scripts/common.sh@338 -- # ver2_l=3 00:25:59.151 15:44:29 -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:25:59.151 15:44:29 -- 
scripts/common.sh@341 -- # case "$op" in 00:25:59.151 15:44:29 -- scripts/common.sh@345 -- # : 1 00:25:59.151 15:44:29 -- scripts/common.sh@361 -- # (( v = 0 )) 00:25:59.151 15:44:29 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.151 15:44:29 -- scripts/common.sh@362 -- # decimal 3 00:25:59.151 15:44:29 -- scripts/common.sh@350 -- # local d=3 00:25:59.151 15:44:29 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:59.151 15:44:29 -- scripts/common.sh@352 -- # echo 3 00:25:59.151 15:44:29 -- scripts/common.sh@362 -- # ver1[v]=3 00:25:59.151 15:44:29 -- scripts/common.sh@363 -- # decimal 3 00:25:59.151 15:44:29 -- scripts/common.sh@350 -- # local d=3 00:25:59.151 15:44:29 -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:25:59.151 15:44:29 -- scripts/common.sh@352 -- # echo 3 00:25:59.151 15:44:29 -- scripts/common.sh@363 -- # ver2[v]=3 00:25:59.151 15:44:29 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:59.151 15:44:29 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:59.151 15:44:29 -- scripts/common.sh@361 -- # (( v++ )) 00:25:59.151 15:44:29 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.151 15:44:29 -- scripts/common.sh@362 -- # decimal 0 00:25:59.151 15:44:29 -- scripts/common.sh@350 -- # local d=0 00:25:59.151 15:44:29 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:59.151 15:44:29 -- scripts/common.sh@352 -- # echo 0 00:25:59.151 15:44:29 -- scripts/common.sh@362 -- # ver1[v]=0 00:25:59.151 15:44:29 -- scripts/common.sh@363 -- # decimal 0 00:25:59.151 15:44:29 -- scripts/common.sh@350 -- # local d=0 00:25:59.151 15:44:29 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:59.151 15:44:29 -- scripts/common.sh@352 -- # echo 0 00:25:59.151 15:44:29 -- scripts/common.sh@363 -- # ver2[v]=0 00:25:59.151 15:44:29 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:59.151 15:44:29 -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:25:59.151 15:44:29 -- scripts/common.sh@361 -- # (( v++ )) 00:25:59.151 15:44:29 -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.151 15:44:29 -- scripts/common.sh@362 -- # decimal 9 00:25:59.151 15:44:29 -- scripts/common.sh@350 -- # local d=9 00:25:59.151 15:44:29 -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:25:59.151 15:44:29 -- scripts/common.sh@352 -- # echo 9 00:25:59.151 15:44:29 -- scripts/common.sh@362 -- # ver1[v]=9 00:25:59.151 15:44:29 -- scripts/common.sh@363 -- # decimal 0 00:25:59.152 15:44:29 -- scripts/common.sh@350 -- # local d=0 00:25:59.152 15:44:29 -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:25:59.152 15:44:29 -- scripts/common.sh@352 -- # echo 0 00:25:59.152 15:44:29 -- scripts/common.sh@363 -- # ver2[v]=0 00:25:59.152 15:44:29 -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:25:59.152 15:44:29 -- scripts/common.sh@364 -- # return 0 00:25:59.152 15:44:29 -- fips/fips.sh@95 -- # openssl info -modulesdir 00:25:59.152 15:44:29 -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:25:59.152 15:44:29 -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:25:59.152 15:44:29 -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:25:59.152 15:44:29 -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:25:59.152 15:44:29 -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:25:59.152 15:44:29 -- fips/fips.sh@104 -- # callback=build_openssl_config 00:25:59.152 15:44:29 -- fips/fips.sh@113 -- # build_openssl_config 00:25:59.152 15:44:29 -- fips/fips.sh@37 -- # cat 00:25:59.152 15:44:29 -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:25:59.152 15:44:29 -- fips/fips.sh@58 -- # cat - 00:25:59.152 15:44:29 -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:25:59.152 15:44:29 -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:25:59.152 15:44:29 -- fips/fips.sh@116 -- # mapfile -t providers 00:25:59.152 15:44:29 -- fips/fips.sh@116 -- # openssl list -providers 00:25:59.152 15:44:29 -- fips/fips.sh@116 -- # grep name 00:25:59.152 15:44:29 -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:25:59.152 15:44:29 -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:25:59.152 15:44:29 -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:25:59.410 15:44:29 -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:25:59.410 15:44:29 -- fips/fips.sh@127 -- # : 00:25:59.410 15:44:29 -- common/autotest_common.sh@638 -- # local es=0 00:25:59.410 15:44:29 -- common/autotest_common.sh@640 -- # valid_exec_arg openssl md5 /dev/fd/62 00:25:59.410 15:44:29 -- common/autotest_common.sh@626 -- # local arg=openssl 00:25:59.410 15:44:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:59.410 15:44:29 -- common/autotest_common.sh@630 -- # type -t openssl 00:25:59.410 15:44:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:59.410 15:44:29 -- common/autotest_common.sh@632 -- # type -P openssl 00:25:59.410 15:44:29 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:25:59.410 15:44:29 -- common/autotest_common.sh@632 -- # arg=/usr/bin/openssl 00:25:59.410 15:44:29 -- common/autotest_common.sh@632 -- # [[ -x /usr/bin/openssl ]] 00:25:59.410 15:44:29 -- common/autotest_common.sh@641 -- # openssl md5 /dev/fd/62 00:25:59.410 Error setting digest 00:25:59.410 00C2AD4E7E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:25:59.410 00C2AD4E7E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:25:59.410 15:44:29 -- common/autotest_common.sh@641 -- # es=1 00:25:59.410 15:44:29 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:25:59.410 15:44:29 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:25:59.410 15:44:29 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:25:59.410 15:44:29 -- fips/fips.sh@130 -- # nvmftestinit 00:25:59.410 15:44:29 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:25:59.410 15:44:29 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.410 15:44:29 -- nvmf/common.sh@437 -- # prepare_net_devs 
00:25:59.410 15:44:29 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:25:59.410 15:44:29 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:25:59.410 15:44:29 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.410 15:44:29 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:59.410 15:44:29 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:59.410 15:44:29 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:25:59.410 15:44:29 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:25:59.410 15:44:29 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:25:59.410 15:44:29 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:25:59.410 15:44:29 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:25:59.410 15:44:29 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:25:59.410 15:44:29 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:59.410 15:44:29 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:59.410 15:44:29 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:25:59.410 15:44:29 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:25:59.410 15:44:29 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:25:59.410 15:44:29 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:25:59.410 15:44:29 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:25:59.410 15:44:29 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:59.410 15:44:29 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:25:59.410 15:44:29 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:25:59.410 15:44:29 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:25:59.410 15:44:29 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:25:59.410 15:44:29 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:25:59.410 15:44:29 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:25:59.410 Cannot find device "nvmf_tgt_br" 00:25:59.410 15:44:29 -- nvmf/common.sh@155 -- # true 00:25:59.410 15:44:29 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:25:59.410 Cannot find device "nvmf_tgt_br2" 00:25:59.410 15:44:29 -- nvmf/common.sh@156 -- # true 00:25:59.410 15:44:29 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:25:59.410 15:44:29 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:25:59.410 Cannot find device "nvmf_tgt_br" 00:25:59.410 15:44:29 -- nvmf/common.sh@158 -- # true 00:25:59.410 15:44:29 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:25:59.410 Cannot find device "nvmf_tgt_br2" 00:25:59.410 15:44:29 -- nvmf/common.sh@159 -- # true 00:25:59.410 15:44:29 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:25:59.411 15:44:29 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:25:59.411 15:44:29 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:25:59.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.411 15:44:29 -- nvmf/common.sh@162 -- # true 00:25:59.411 15:44:29 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:25:59.411 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:25:59.411 15:44:29 -- nvmf/common.sh@163 -- # true 00:25:59.411 15:44:29 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:25:59.411 15:44:29 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:25:59.411 15:44:29 
-- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:25:59.411 15:44:29 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:25:59.411 15:44:29 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:25:59.411 15:44:29 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:25:59.669 15:44:29 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:25:59.669 15:44:29 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:25:59.669 15:44:29 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:25:59.669 15:44:29 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:25:59.669 15:44:29 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:25:59.669 15:44:29 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:25:59.669 15:44:29 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:25:59.669 15:44:29 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:25:59.669 15:44:29 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:25:59.669 15:44:29 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:25:59.669 15:44:29 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:25:59.669 15:44:29 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:25:59.669 15:44:29 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:25:59.669 15:44:29 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:25:59.669 15:44:29 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:25:59.669 15:44:29 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:25:59.669 15:44:29 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:25:59.669 15:44:29 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:25:59.669 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:59.669 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.078 ms 00:25:59.669 00:25:59.669 --- 10.0.0.2 ping statistics --- 00:25:59.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.669 rtt min/avg/max/mdev = 0.078/0.078/0.078/0.000 ms 00:25:59.669 15:44:29 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:25:59.669 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:25:59.669 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.097 ms 00:25:59.669 00:25:59.669 --- 10.0.0.3 ping statistics --- 00:25:59.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.669 rtt min/avg/max/mdev = 0.097/0.097/0.097/0.000 ms 00:25:59.669 15:44:29 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:25:59.669 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:59.669 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:25:59.669 00:25:59.669 --- 10.0.0.1 ping statistics --- 00:25:59.669 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:59.669 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:25:59.669 15:44:29 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:59.669 15:44:29 -- nvmf/common.sh@422 -- # return 0 00:25:59.669 15:44:29 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:25:59.669 15:44:29 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:59.670 15:44:29 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:25:59.670 15:44:29 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:25:59.670 15:44:29 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:59.670 15:44:29 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:25:59.670 15:44:29 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:25:59.670 15:44:29 -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:25:59.670 15:44:29 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:25:59.670 15:44:29 -- common/autotest_common.sh@710 -- # xtrace_disable 00:25:59.670 15:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:59.670 15:44:29 -- nvmf/common.sh@470 -- # nvmfpid=79097 00:25:59.670 15:44:29 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:25:59.670 15:44:29 -- nvmf/common.sh@471 -- # waitforlisten 79097 00:25:59.670 15:44:29 -- common/autotest_common.sh@817 -- # '[' -z 79097 ']' 00:25:59.670 15:44:29 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:59.670 15:44:29 -- common/autotest_common.sh@822 -- # local max_retries=100 00:25:59.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:59.670 15:44:29 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:59.670 15:44:29 -- common/autotest_common.sh@826 -- # xtrace_disable 00:25:59.670 15:44:29 -- common/autotest_common.sh@10 -- # set +x 00:25:59.928 [2024-04-26 15:44:29.970756] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:25:59.928 [2024-04-26 15:44:29.970857] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:59.928 [2024-04-26 15:44:30.107057] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:00.187 [2024-04-26 15:44:30.224996] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.187 [2024-04-26 15:44:30.225064] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.187 [2024-04-26 15:44:30.225076] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.187 [2024-04-26 15:44:30.225085] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.187 [2024-04-26 15:44:30.225093] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:00.187 [2024-04-26 15:44:30.225131] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.756 15:44:30 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:00.756 15:44:30 -- common/autotest_common.sh@850 -- # return 0 00:26:00.756 15:44:30 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:00.756 15:44:30 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:00.756 15:44:30 -- common/autotest_common.sh@10 -- # set +x 00:26:00.756 15:44:30 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.756 15:44:30 -- fips/fips.sh@133 -- # trap cleanup EXIT 00:26:00.756 15:44:30 -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:00.756 15:44:30 -- fips/fips.sh@137 -- # key_path=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:00.756 15:44:30 -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:26:00.756 15:44:30 -- fips/fips.sh@139 -- # chmod 0600 /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:00.756 15:44:30 -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:00.756 15:44:30 -- fips/fips.sh@22 -- # local key=/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:00.756 15:44:30 -- fips/fips.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:01.014 [2024-04-26 15:44:31.218562] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:01.014 [2024-04-26 15:44:31.234525] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:01.014 [2024-04-26 15:44:31.234779] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:01.014 [2024-04-26 15:44:31.266049] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:01.014 malloc0 00:26:01.014 15:44:31 -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:01.014 15:44:31 -- fips/fips.sh@147 -- # bdevperf_pid=79159 00:26:01.014 15:44:31 -- fips/fips.sh@145 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:26:01.014 15:44:31 -- fips/fips.sh@148 -- # waitforlisten 79159 /var/tmp/bdevperf.sock 00:26:01.014 15:44:31 -- common/autotest_common.sh@817 -- # '[' -z 79159 ']' 00:26:01.014 15:44:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:01.015 15:44:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:01.015 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:01.015 15:44:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:01.015 15:44:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:01.015 15:44:31 -- common/autotest_common.sh@10 -- # set +x 00:26:01.273 [2024-04-26 15:44:31.362121] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:26:01.273 [2024-04-26 15:44:31.362216] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79159 ] 00:26:01.273 [2024-04-26 15:44:31.511012] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:01.532 [2024-04-26 15:44:31.640168] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:02.468 15:44:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:02.468 15:44:32 -- common/autotest_common.sh@850 -- # return 0 00:26:02.468 15:44:32 -- fips/fips.sh@150 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:02.468 [2024-04-26 15:44:32.638802] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:02.468 [2024-04-26 15:44:32.638939] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:02.468 TLSTESTn1 00:26:02.468 15:44:32 -- fips/fips.sh@154 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:02.726 Running I/O for 10 seconds... 00:26:12.691 00:26:12.691 Latency(us) 00:26:12.691 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.691 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:26:12.691 Verification LBA range: start 0x0 length 0x2000 00:26:12.691 TLSTESTn1 : 10.02 3622.03 14.15 0.00 0.00 35273.79 6821.70 43372.92 00:26:12.691 =================================================================================================================== 00:26:12.691 Total : 3622.03 14.15 0.00 0.00 35273.79 6821.70 43372.92 00:26:12.691 0 00:26:12.691 15:44:42 -- fips/fips.sh@1 -- # cleanup 00:26:12.691 15:44:42 -- fips/fips.sh@15 -- # process_shm --id 0 00:26:12.691 15:44:42 -- common/autotest_common.sh@794 -- # type=--id 00:26:12.691 15:44:42 -- common/autotest_common.sh@795 -- # id=0 00:26:12.691 15:44:42 -- common/autotest_common.sh@796 -- # '[' --id = --pid ']' 00:26:12.691 15:44:42 -- common/autotest_common.sh@800 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:26:12.691 15:44:42 -- common/autotest_common.sh@800 -- # shm_files=nvmf_trace.0 00:26:12.691 15:44:42 -- common/autotest_common.sh@802 -- # [[ -z nvmf_trace.0 ]] 00:26:12.691 15:44:42 -- common/autotest_common.sh@806 -- # for n in $shm_files 00:26:12.691 15:44:42 -- common/autotest_common.sh@807 -- # tar -C /dev/shm/ -cvzf /home/vagrant/spdk_repo/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:26:12.691 nvmf_trace.0 00:26:12.691 15:44:42 -- common/autotest_common.sh@809 -- # return 0 00:26:12.691 15:44:42 -- fips/fips.sh@16 -- # killprocess 79159 00:26:12.691 15:44:42 -- common/autotest_common.sh@936 -- # '[' -z 79159 ']' 00:26:12.691 15:44:42 -- common/autotest_common.sh@940 -- # kill -0 79159 00:26:12.691 15:44:42 -- common/autotest_common.sh@941 -- # uname 00:26:12.691 15:44:42 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:12.691 15:44:42 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79159 00:26:12.691 15:44:42 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:26:12.691 
15:44:42 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:26:12.950 killing process with pid 79159 00:26:12.950 15:44:42 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79159' 00:26:12.950 Received shutdown signal, test time was about 10.000000 seconds 00:26:12.950 00:26:12.950 Latency(us) 00:26:12.950 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:12.950 =================================================================================================================== 00:26:12.950 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:12.950 15:44:42 -- common/autotest_common.sh@955 -- # kill 79159 00:26:12.950 [2024-04-26 15:44:42.984746] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:12.950 15:44:42 -- common/autotest_common.sh@960 -- # wait 79159 00:26:12.950 15:44:43 -- fips/fips.sh@17 -- # nvmftestfini 00:26:12.950 15:44:43 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:12.950 15:44:43 -- nvmf/common.sh@117 -- # sync 00:26:13.222 15:44:43 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:13.222 15:44:43 -- nvmf/common.sh@120 -- # set +e 00:26:13.222 15:44:43 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:13.222 15:44:43 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:13.222 rmmod nvme_tcp 00:26:13.222 rmmod nvme_fabrics 00:26:13.222 rmmod nvme_keyring 00:26:13.222 15:44:43 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:13.222 15:44:43 -- nvmf/common.sh@124 -- # set -e 00:26:13.222 15:44:43 -- nvmf/common.sh@125 -- # return 0 00:26:13.222 15:44:43 -- nvmf/common.sh@478 -- # '[' -n 79097 ']' 00:26:13.222 15:44:43 -- nvmf/common.sh@479 -- # killprocess 79097 00:26:13.222 15:44:43 -- common/autotest_common.sh@936 -- # '[' -z 79097 ']' 00:26:13.222 15:44:43 -- common/autotest_common.sh@940 -- # kill -0 79097 00:26:13.222 15:44:43 -- common/autotest_common.sh@941 -- # uname 00:26:13.222 15:44:43 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:13.222 15:44:43 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79097 00:26:13.222 15:44:43 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:13.222 killing process with pid 79097 00:26:13.222 15:44:43 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:13.222 15:44:43 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79097' 00:26:13.222 15:44:43 -- common/autotest_common.sh@955 -- # kill 79097 00:26:13.222 [2024-04-26 15:44:43.352825] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:13.222 15:44:43 -- common/autotest_common.sh@960 -- # wait 79097 00:26:13.480 15:44:43 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:13.480 15:44:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:13.480 15:44:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:13.480 15:44:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:13.480 15:44:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:13.480 15:44:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.480 15:44:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.480 15:44:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.480 15:44:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:13.480 15:44:43 -- fips/fips.sh@18 -- # rm -f 
/home/vagrant/spdk_repo/spdk/test/nvmf/fips/key.txt 00:26:13.480 00:26:13.480 real 0m14.443s 00:26:13.480 user 0m19.247s 00:26:13.480 sys 0m6.093s 00:26:13.480 15:44:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:13.480 15:44:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 ************************************ 00:26:13.480 END TEST nvmf_fips 00:26:13.480 ************************************ 00:26:13.480 15:44:43 -- nvmf/nvmf.sh@64 -- # '[' 0 -eq 1 ']' 00:26:13.480 15:44:43 -- nvmf/nvmf.sh@70 -- # [[ virt == phy ]] 00:26:13.480 15:44:43 -- nvmf/nvmf.sh@84 -- # timing_exit target 00:26:13.480 15:44:43 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:13.480 15:44:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 15:44:43 -- nvmf/nvmf.sh@86 -- # timing_enter host 00:26:13.480 15:44:43 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:13.480 15:44:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.480 15:44:43 -- nvmf/nvmf.sh@88 -- # [[ 0 -eq 0 ]] 00:26:13.480 15:44:43 -- nvmf/nvmf.sh@89 -- # run_test nvmf_multicontroller /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:13.480 15:44:43 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:13.480 15:44:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:13.480 15:44:43 -- common/autotest_common.sh@10 -- # set +x 00:26:13.737 ************************************ 00:26:13.737 START TEST nvmf_multicontroller 00:26:13.737 ************************************ 00:26:13.737 15:44:43 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:26:13.737 * Looking for test storage... 00:26:13.737 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:13.737 15:44:43 -- host/multicontroller.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:13.737 15:44:43 -- nvmf/common.sh@7 -- # uname -s 00:26:13.737 15:44:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:13.737 15:44:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:13.737 15:44:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:13.737 15:44:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:13.737 15:44:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:13.737 15:44:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:13.737 15:44:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:13.737 15:44:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:13.737 15:44:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:13.737 15:44:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:13.737 15:44:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:13.737 15:44:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:13.737 15:44:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:13.737 15:44:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:13.737 15:44:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:13.737 15:44:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:13.737 15:44:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:13.737 15:44:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:13.737 15:44:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:13.737 15:44:43 -- 
scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:13.737 15:44:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.737 15:44:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.737 15:44:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.737 15:44:43 -- paths/export.sh@5 -- # export PATH 00:26:13.737 15:44:43 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:13.737 15:44:43 -- nvmf/common.sh@47 -- # : 0 00:26:13.737 15:44:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:13.737 15:44:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:13.737 15:44:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:13.737 15:44:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:13.737 15:44:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:13.737 15:44:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:13.737 15:44:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:13.737 15:44:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:13.737 15:44:43 -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:13.738 15:44:43 -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:13.738 15:44:43 -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:26:13.738 15:44:43 -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:26:13.738 15:44:43 -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:13.738 15:44:43 -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 
00:26:13.738 15:44:43 -- host/multicontroller.sh@23 -- # nvmftestinit 00:26:13.738 15:44:43 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:13.738 15:44:43 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:13.738 15:44:43 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:13.738 15:44:43 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:13.738 15:44:43 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:13.738 15:44:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:13.738 15:44:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:13.738 15:44:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:13.738 15:44:43 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:13.738 15:44:43 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:13.738 15:44:43 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:13.738 15:44:43 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:13.738 15:44:43 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:13.738 15:44:43 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:13.738 15:44:43 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:13.738 15:44:43 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:13.738 15:44:43 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:13.738 15:44:43 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:13.738 15:44:43 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:13.738 15:44:43 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:13.738 15:44:43 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:13.738 15:44:43 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:13.738 15:44:43 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:13.738 15:44:43 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:13.738 15:44:43 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:13.738 15:44:43 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:13.738 15:44:43 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:13.738 15:44:43 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:13.738 Cannot find device "nvmf_tgt_br" 00:26:13.738 15:44:43 -- nvmf/common.sh@155 -- # true 00:26:13.738 15:44:43 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:13.738 Cannot find device "nvmf_tgt_br2" 00:26:13.738 15:44:43 -- nvmf/common.sh@156 -- # true 00:26:13.738 15:44:43 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:13.738 15:44:43 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:13.738 Cannot find device "nvmf_tgt_br" 00:26:13.738 15:44:43 -- nvmf/common.sh@158 -- # true 00:26:13.738 15:44:43 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:13.738 Cannot find device "nvmf_tgt_br2" 00:26:13.738 15:44:43 -- nvmf/common.sh@159 -- # true 00:26:13.738 15:44:43 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:13.738 15:44:44 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:13.738 15:44:44 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:13.738 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:13.738 15:44:44 -- nvmf/common.sh@162 -- # true 00:26:13.738 15:44:44 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:13.738 Cannot open network namespace "nvmf_tgt_ns_spdk": 
No such file or directory 00:26:13.738 15:44:44 -- nvmf/common.sh@163 -- # true 00:26:13.738 15:44:44 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:13.996 15:44:44 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:13.996 15:44:44 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:13.996 15:44:44 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:13.996 15:44:44 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:13.996 15:44:44 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:13.996 15:44:44 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:13.996 15:44:44 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:13.996 15:44:44 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:13.996 15:44:44 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:13.996 15:44:44 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:13.996 15:44:44 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:13.996 15:44:44 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:13.996 15:44:44 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:13.996 15:44:44 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:13.996 15:44:44 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:13.996 15:44:44 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:13.996 15:44:44 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:13.996 15:44:44 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:13.996 15:44:44 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:13.996 15:44:44 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:13.996 15:44:44 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:13.996 15:44:44 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:13.996 15:44:44 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:13.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:13.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:26:13.996 00:26:13.996 --- 10.0.0.2 ping statistics --- 00:26:13.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.996 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:13.996 15:44:44 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:13.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:13.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.039 ms 00:26:13.996 00:26:13.996 --- 10.0.0.3 ping statistics --- 00:26:13.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.996 rtt min/avg/max/mdev = 0.039/0.039/0.039/0.000 ms 00:26:13.996 15:44:44 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:13.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:13.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.025 ms 00:26:13.996 00:26:13.996 --- 10.0.0.1 ping statistics --- 00:26:13.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:13.996 rtt min/avg/max/mdev = 0.025/0.025/0.025/0.000 ms 00:26:13.996 15:44:44 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:13.996 15:44:44 -- nvmf/common.sh@422 -- # return 0 00:26:13.996 15:44:44 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:13.996 15:44:44 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:13.996 15:44:44 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:13.996 15:44:44 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:13.996 15:44:44 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:13.996 15:44:44 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:13.996 15:44:44 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:13.996 15:44:44 -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:26:13.996 15:44:44 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:13.996 15:44:44 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:13.996 15:44:44 -- common/autotest_common.sh@10 -- # set +x 00:26:13.996 15:44:44 -- nvmf/common.sh@470 -- # nvmfpid=79522 00:26:13.996 15:44:44 -- nvmf/common.sh@471 -- # waitforlisten 79522 00:26:13.996 15:44:44 -- common/autotest_common.sh@817 -- # '[' -z 79522 ']' 00:26:13.996 15:44:44 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.996 15:44:44 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:13.996 15:44:44 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:13.996 15:44:44 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.996 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.996 15:44:44 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:13.996 15:44:44 -- common/autotest_common.sh@10 -- # set +x 00:26:13.996 [2024-04-26 15:44:44.269545] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:13.996 [2024-04-26 15:44:44.269622] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.255 [2024-04-26 15:44:44.401752] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:14.255 [2024-04-26 15:44:44.516871] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.255 [2024-04-26 15:44:44.516930] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.255 [2024-04-26 15:44:44.516941] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.255 [2024-04-26 15:44:44.516949] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.255 [2024-04-26 15:44:44.516956] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
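For reference, the virtual test network that nvmf_veth_init assembled in the trace above condenses to the sketch below. The commands and the interface/address names are taken directly from the logged ip/iptables calls; treat it as an illustrative sketch of the topology, not the verbatim common.sh code (run as root).

# Target runs in its own network namespace, reachable from the initiator over veth pairs and a bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br    # initiator-side pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br      # target-side pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk               # move the target end into the namespace
ip addr add 10.0.0.1/24 dev nvmf_init_if                     # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link add nvmf_br type bridge && ip link set nvmf_br up    # bridge joins the two pairs
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
ping -c 1 10.0.0.2   # connectivity check, as logged above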
00:26:14.255 [2024-04-26 15:44:44.518043] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:14.255 [2024-04-26 15:44:44.518229] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.255 [2024-04-26 15:44:44.518224] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:15.190 15:44:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:15.190 15:44:45 -- common/autotest_common.sh@850 -- # return 0 00:26:15.190 15:44:45 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:15.190 15:44:45 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:15.190 15:44:45 -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 [2024-04-26 15:44:45.337363] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 Malloc0 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 [2024-04-26 15:44:45.406896] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 [2024-04-26 15:44:45.414807] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 Malloc1 00:26:15.190 15:44:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:26:15.190 15:44:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:15.190 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:15.190 15:44:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:15.190 15:44:45 -- host/multicontroller.sh@44 -- # bdevperf_pid=79580 00:26:15.190 15:44:45 -- host/multicontroller.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:26:15.190 15:44:45 -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:15.190 15:44:45 -- host/multicontroller.sh@47 -- # waitforlisten 79580 /var/tmp/bdevperf.sock 00:26:15.190 15:44:45 -- common/autotest_common.sh@817 -- # '[' -z 79580 ']' 00:26:15.190 15:44:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:15.190 15:44:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:15.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:15.191 15:44:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
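The target-side configuration that multicontroller.sh drives through rpc_cmd above boils down to the following sequence. The sketch assumes SPDK's scripts/rpc.py as the JSON-RPC client (the trace uses the rpc_cmd wrapper); the NQNs, addresses, ports and bdevperf options are the ones logged.

rpc=scripts/rpc.py                              # assumed client; the trace goes through rpc_cmd
$rpc nvmf_create_transport -t tcp -o -u 8192    # transport options exactly as logged above
$rpc bdev_malloc_create 64 512 -b Malloc0       # 64 MiB RAM disk, 512 B blocks
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc bdev_malloc_create 64 512 -b Malloc1       # second RAM disk for the second subsystem
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421
# bdevperf is started idle (-z) on its own RPC socket so controllers can be attached to it afterwards:
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &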
00:26:15.191 15:44:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:15.191 15:44:45 -- common/autotest_common.sh@10 -- # set +x 00:26:16.566 15:44:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:16.566 15:44:46 -- common/autotest_common.sh@850 -- # return 0 00:26:16.566 15:44:46 -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:16.566 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.566 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.566 NVMe0n1 00:26:16.566 15:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.566 15:44:46 -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:16.566 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.566 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.566 15:44:46 -- host/multicontroller.sh@54 -- # grep -c NVMe 00:26:16.566 15:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.566 1 00:26:16.567 15:44:46 -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:16.567 15:44:46 -- common/autotest_common.sh@638 -- # local es=0 00:26:16.567 15:44:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:16.567 15:44:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 2024/04/26 15:44:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostnqn:nqn.2021-09-7.io.spdk:00001 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:16.567 request: 00:26:16.567 { 00:26:16.567 "method": "bdev_nvme_attach_controller", 00:26:16.567 "params": { 00:26:16.567 "name": "NVMe0", 00:26:16.567 "trtype": "tcp", 00:26:16.567 "traddr": "10.0.0.2", 00:26:16.567 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:26:16.567 "hostaddr": "10.0.0.2", 00:26:16.567 "hostsvcid": "60000", 00:26:16.567 "adrfam": "ipv4", 00:26:16.567 "trsvcid": "4420", 00:26:16.567 "subnqn": "nqn.2016-06.io.spdk:cnode1" 00:26:16.567 } 00:26:16.567 } 00:26:16.567 Got JSON-RPC error response 00:26:16.567 GoRPCClient: error on JSON-RPC call 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:16.567 15:44:46 -- 
common/autotest_common.sh@641 -- # es=1 00:26:16.567 15:44:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:16.567 15:44:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:16.567 15:44:46 -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:16.567 15:44:46 -- common/autotest_common.sh@638 -- # local es=0 00:26:16.567 15:44:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:16.567 15:44:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 2024/04/26 15:44:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode2 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:16.567 request: 00:26:16.567 { 00:26:16.567 "method": "bdev_nvme_attach_controller", 00:26:16.567 "params": { 00:26:16.567 "name": "NVMe0", 00:26:16.567 "trtype": "tcp", 00:26:16.567 "traddr": "10.0.0.2", 00:26:16.567 "hostaddr": "10.0.0.2", 00:26:16.567 "hostsvcid": "60000", 00:26:16.567 "adrfam": "ipv4", 00:26:16.567 "trsvcid": "4420", 00:26:16.567 "subnqn": "nqn.2016-06.io.spdk:cnode2" 00:26:16.567 } 00:26:16.567 } 00:26:16.567 Got JSON-RPC error response 00:26:16.567 GoRPCClient: error on JSON-RPC call 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # es=1 00:26:16.567 15:44:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:16.567 15:44:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:16.567 15:44:46 -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@638 -- # local es=0 00:26:16.567 15:44:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:16.567 15:44:46 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 2024/04/26 15:44:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:disable name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists and multipath is disabled 00:26:16.567 request: 00:26:16.567 { 00:26:16.567 "method": "bdev_nvme_attach_controller", 00:26:16.567 "params": { 00:26:16.567 "name": "NVMe0", 00:26:16.567 "trtype": "tcp", 00:26:16.567 "traddr": "10.0.0.2", 00:26:16.567 "hostaddr": "10.0.0.2", 00:26:16.567 "hostsvcid": "60000", 00:26:16.567 "adrfam": "ipv4", 00:26:16.567 "trsvcid": "4420", 00:26:16.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.567 "multipath": "disable" 00:26:16.567 } 00:26:16.567 } 00:26:16.567 Got JSON-RPC error response 00:26:16.567 GoRPCClient: error on JSON-RPC call 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # es=1 00:26:16.567 15:44:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:16.567 15:44:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:16.567 15:44:46 -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:16.567 15:44:46 -- common/autotest_common.sh@638 -- # local es=0 00:26:16.567 15:44:46 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:16.567 15:44:46 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:26:16.567 15:44:46 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 2024/04/26 15:44:46 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostaddr:10.0.0.2 hostsvcid:60000 multipath:failover name:NVMe0 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-114 Msg=A controller named NVMe0 already exists with the specified network path 00:26:16.567 request: 00:26:16.567 { 00:26:16.567 "method": "bdev_nvme_attach_controller", 00:26:16.567 "params": { 00:26:16.567 "name": "NVMe0", 
00:26:16.567 "trtype": "tcp", 00:26:16.567 "traddr": "10.0.0.2", 00:26:16.567 "hostaddr": "10.0.0.2", 00:26:16.567 "hostsvcid": "60000", 00:26:16.567 "adrfam": "ipv4", 00:26:16.567 "trsvcid": "4420", 00:26:16.567 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:16.567 "multipath": "failover" 00:26:16.567 } 00:26:16.567 } 00:26:16.567 Got JSON-RPC error response 00:26:16.567 GoRPCClient: error on JSON-RPC call 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@641 -- # es=1 00:26:16.567 15:44:46 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:26:16.567 15:44:46 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:26:16.567 15:44:46 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:26:16.567 15:44:46 -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.567 15:44:46 -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.567 15:44:46 -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.567 15:44:46 -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:16.567 15:44:46 -- host/multicontroller.sh@90 -- # grep -c NVMe 00:26:16.567 15:44:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:16.567 15:44:46 -- common/autotest_common.sh@10 -- # set +x 00:26:16.567 15:44:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:16.567 15:44:46 -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:26:16.568 15:44:46 -- host/multicontroller.sh@95 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:17.945 0 00:26:17.945 15:44:47 -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:26:17.945 15:44:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.945 15:44:47 -- common/autotest_common.sh@10 -- # set +x 00:26:17.945 15:44:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:17.945 15:44:47 -- host/multicontroller.sh@100 -- # killprocess 79580 00:26:17.945 15:44:47 -- common/autotest_common.sh@936 -- # '[' -z 79580 ']' 00:26:17.945 15:44:47 -- common/autotest_common.sh@940 -- # kill -0 79580 00:26:17.945 15:44:47 -- common/autotest_common.sh@941 -- # uname 00:26:17.945 15:44:47 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:17.945 15:44:47 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79580 00:26:17.945 killing process with pid 79580 00:26:17.945 
15:44:47 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:17.945 15:44:47 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:17.945 15:44:47 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79580' 00:26:17.945 15:44:47 -- common/autotest_common.sh@955 -- # kill 79580 00:26:17.945 15:44:47 -- common/autotest_common.sh@960 -- # wait 79580 00:26:17.945 15:44:48 -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.945 15:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:17.945 15:44:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.203 15:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.203 15:44:48 -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:18.204 15:44:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:18.204 15:44:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.204 15:44:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:18.204 15:44:48 -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:26:18.204 15:44:48 -- host/multicontroller.sh@107 -- # pap /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:18.204 15:44:48 -- common/autotest_common.sh@1598 -- # read -r file 00:26:18.204 15:44:48 -- common/autotest_common.sh@1597 -- # find /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt -type f 00:26:18.204 15:44:48 -- common/autotest_common.sh@1597 -- # sort -u 00:26:18.204 15:44:48 -- common/autotest_common.sh@1599 -- # cat 00:26:18.204 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:18.204 [2024-04-26 15:44:45.534392] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:18.204 [2024-04-26 15:44:45.534522] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79580 ] 00:26:18.204 [2024-04-26 15:44:45.675852] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.204 [2024-04-26 15:44:45.801834] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:18.204 [2024-04-26 15:44:46.782166] bdev.c:4553:bdev_name_add: *ERROR*: Bdev name 97603dce-f31e-497f-baed-679700fb4fcb already exists 00:26:18.204 [2024-04-26 15:44:46.782259] bdev.c:7656:bdev_register: *ERROR*: Unable to add uuid:97603dce-f31e-497f-baed-679700fb4fcb alias for bdev NVMe1n1 00:26:18.204 [2024-04-26 15:44:46.782281] bdev_nvme.c:4272:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:26:18.204 Running I/O for 1 seconds... 
00:26:18.204 00:26:18.204 Latency(us) 00:26:18.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.204 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:26:18.204 NVMe0n1 : 1.00 19773.88 77.24 0.00 0.00 6455.33 2517.18 14417.92 00:26:18.204 =================================================================================================================== 00:26:18.204 Total : 19773.88 77.24 0.00 0.00 6455.33 2517.18 14417.92 00:26:18.204 Received shutdown signal, test time was about 1.000000 seconds 00:26:18.204 00:26:18.204 Latency(us) 00:26:18.204 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:18.204 =================================================================================================================== 00:26:18.204 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:18.204 --- /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt --- 00:26:18.204 15:44:48 -- common/autotest_common.sh@1604 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:26:18.204 15:44:48 -- common/autotest_common.sh@1598 -- # read -r file 00:26:18.204 15:44:48 -- host/multicontroller.sh@108 -- # nvmftestfini 00:26:18.204 15:44:48 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:18.204 15:44:48 -- nvmf/common.sh@117 -- # sync 00:26:18.204 15:44:48 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:18.204 15:44:48 -- nvmf/common.sh@120 -- # set +e 00:26:18.204 15:44:48 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:18.204 15:44:48 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:18.204 rmmod nvme_tcp 00:26:18.204 rmmod nvme_fabrics 00:26:18.204 rmmod nvme_keyring 00:26:18.204 15:44:48 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:18.204 15:44:48 -- nvmf/common.sh@124 -- # set -e 00:26:18.204 15:44:48 -- nvmf/common.sh@125 -- # return 0 00:26:18.204 15:44:48 -- nvmf/common.sh@478 -- # '[' -n 79522 ']' 00:26:18.204 15:44:48 -- nvmf/common.sh@479 -- # killprocess 79522 00:26:18.204 15:44:48 -- common/autotest_common.sh@936 -- # '[' -z 79522 ']' 00:26:18.204 15:44:48 -- common/autotest_common.sh@940 -- # kill -0 79522 00:26:18.204 15:44:48 -- common/autotest_common.sh@941 -- # uname 00:26:18.204 15:44:48 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:18.204 15:44:48 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79522 00:26:18.204 15:44:48 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:26:18.204 killing process with pid 79522 00:26:18.204 15:44:48 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:26:18.204 15:44:48 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79522' 00:26:18.204 15:44:48 -- common/autotest_common.sh@955 -- # kill 79522 00:26:18.204 15:44:48 -- common/autotest_common.sh@960 -- # wait 79522 00:26:18.467 15:44:48 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:18.467 15:44:48 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:18.467 15:44:48 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:18.467 15:44:48 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:18.467 15:44:48 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:18.467 15:44:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.467 15:44:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.467 15:44:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.467 15:44:48 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:18.467 
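The attach-controller exchanges above exercise how bdev_nvme_attach_controller treats a controller name that is already in use. What the trace rejected and what it accepted condenses to the sketch below; the commands mirror the logged RPCs and assume bdevperf's RPC socket at /var/tmp/bdevperf.sock with scripts/rpc.py as the client.

rpc="scripts/rpc.py -s /var/tmp/bdevperf.sock"   # assumed client; the trace uses rpc_cmd -s ...
# Initial path: controller NVMe0 on cnode1 through the 4420 listener.
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
# Rejected re-uses of the name NVMe0, each answered with "A controller named NVMe0 already exists ...":
#   ... -q nqn.2021-09-7.io.spdk:00001   # different hostnqn
#   ... -n nqn.2016-06.io.spdk:cnode2    # different subsystem NQN
#   ... -x disable                       # multipath explicitly disabled
#   ... -x failover                      # failover requested for the already-attached 4420 path
# Accepted: a second path to the same subsystem through the 4421 listener ...
$rpc bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
$rpc bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# ... and an independently named controller on the same listener:
$rpc bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 \
    -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000
$rpc bdev_nvme_get_controllers | grep -c NVMe    # the trace expects 2 here (NVMe0 and NVMe1)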
00:26:18.467 real 0m4.919s 00:26:18.467 user 0m15.511s 00:26:18.467 sys 0m1.035s 00:26:18.467 ************************************ 00:26:18.467 END TEST nvmf_multicontroller 00:26:18.467 ************************************ 00:26:18.467 15:44:48 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:18.467 15:44:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.736 15:44:48 -- nvmf/nvmf.sh@90 -- # run_test nvmf_aer /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:18.736 15:44:48 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:18.736 15:44:48 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:18.736 15:44:48 -- common/autotest_common.sh@10 -- # set +x 00:26:18.736 ************************************ 00:26:18.736 START TEST nvmf_aer 00:26:18.736 ************************************ 00:26:18.736 15:44:48 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/aer.sh --transport=tcp 00:26:18.736 * Looking for test storage... 00:26:18.736 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:18.736 15:44:48 -- host/aer.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:18.736 15:44:48 -- nvmf/common.sh@7 -- # uname -s 00:26:18.736 15:44:48 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:18.736 15:44:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:18.736 15:44:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:18.736 15:44:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:18.736 15:44:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:18.736 15:44:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:18.736 15:44:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:18.736 15:44:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:18.736 15:44:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:18.736 15:44:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:18.736 15:44:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:18.736 15:44:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:18.736 15:44:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:18.736 15:44:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:18.736 15:44:48 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:18.736 15:44:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:18.736 15:44:48 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:18.736 15:44:48 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:18.736 15:44:48 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:18.736 15:44:48 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:18.736 15:44:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.736 15:44:48 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.736 15:44:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.736 15:44:48 -- paths/export.sh@5 -- # export PATH 00:26:18.736 15:44:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:18.736 15:44:48 -- nvmf/common.sh@47 -- # : 0 00:26:18.736 15:44:48 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:18.736 15:44:48 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:18.736 15:44:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:18.736 15:44:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:18.736 15:44:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:18.736 15:44:48 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:18.736 15:44:48 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:18.736 15:44:48 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:18.737 15:44:48 -- host/aer.sh@11 -- # nvmftestinit 00:26:18.737 15:44:48 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:18.737 15:44:48 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:18.737 15:44:48 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:18.737 15:44:48 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:18.737 15:44:48 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:18.737 15:44:48 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:18.737 15:44:48 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:18.737 15:44:48 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:18.737 15:44:48 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:18.737 15:44:48 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:18.737 15:44:48 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:18.737 15:44:48 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:18.737 15:44:48 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:18.737 15:44:48 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:18.737 15:44:48 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:18.737 15:44:48 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:18.737 15:44:48 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:18.737 15:44:48 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:18.737 15:44:48 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:18.737 15:44:48 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:18.737 15:44:48 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:18.737 15:44:48 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:18.737 15:44:48 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:18.737 15:44:48 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:18.737 15:44:48 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:18.737 15:44:48 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:18.737 15:44:48 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:18.737 15:44:48 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:18.737 Cannot find device "nvmf_tgt_br" 00:26:18.737 15:44:48 -- nvmf/common.sh@155 -- # true 00:26:18.737 15:44:48 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:18.737 Cannot find device "nvmf_tgt_br2" 00:26:18.737 15:44:48 -- nvmf/common.sh@156 -- # true 00:26:18.737 15:44:48 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:18.737 15:44:48 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:18.737 Cannot find device "nvmf_tgt_br" 00:26:18.737 15:44:49 -- nvmf/common.sh@158 -- # true 00:26:18.737 15:44:49 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:18.737 Cannot find device "nvmf_tgt_br2" 00:26:18.737 15:44:49 -- nvmf/common.sh@159 -- # true 00:26:18.737 15:44:49 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:18.996 15:44:49 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:18.996 15:44:49 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:18.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.996 15:44:49 -- nvmf/common.sh@162 -- # true 00:26:18.996 15:44:49 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:18.996 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:18.996 15:44:49 -- nvmf/common.sh@163 -- # true 00:26:18.996 15:44:49 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:18.996 15:44:49 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:18.996 15:44:49 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:18.996 15:44:49 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:18.996 15:44:49 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:18.996 15:44:49 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:18.996 15:44:49 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:18.996 15:44:49 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:18.996 15:44:49 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:18.996 15:44:49 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:18.996 15:44:49 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:18.996 15:44:49 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:18.996 15:44:49 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:18.996 15:44:49 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:18.996 15:44:49 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:18.996 15:44:49 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:18.996 15:44:49 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:18.996 15:44:49 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:18.996 15:44:49 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:18.996 15:44:49 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:18.996 15:44:49 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:18.996 15:44:49 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:18.996 15:44:49 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:18.996 15:44:49 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:18.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:18.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.064 ms 00:26:18.996 00:26:18.996 --- 10.0.0.2 ping statistics --- 00:26:18.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.996 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:26:18.996 15:44:49 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:18.996 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:18.996 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.038 ms 00:26:18.996 00:26:18.996 --- 10.0.0.3 ping statistics --- 00:26:18.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.996 rtt min/avg/max/mdev = 0.038/0.038/0.038/0.000 ms 00:26:18.996 15:44:49 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:18.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:18.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.019 ms 00:26:18.996 00:26:18.996 --- 10.0.0.1 ping statistics --- 00:26:18.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:18.996 rtt min/avg/max/mdev = 0.019/0.019/0.019/0.000 ms 00:26:18.996 15:44:49 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:18.996 15:44:49 -- nvmf/common.sh@422 -- # return 0 00:26:18.996 15:44:49 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:18.996 15:44:49 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:18.996 15:44:49 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:18.996 15:44:49 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:18.996 15:44:49 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:18.996 15:44:49 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:18.996 15:44:49 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:19.255 15:44:49 -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:26:19.255 15:44:49 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:19.255 15:44:49 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:19.255 15:44:49 -- common/autotest_common.sh@10 -- # set +x 00:26:19.255 15:44:49 -- nvmf/common.sh@470 -- # nvmfpid=79825 00:26:19.255 15:44:49 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:19.255 15:44:49 -- nvmf/common.sh@471 -- # waitforlisten 79825 00:26:19.255 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:19.255 15:44:49 -- common/autotest_common.sh@817 -- # '[' -z 79825 ']' 00:26:19.255 15:44:49 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:19.255 15:44:49 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:19.255 15:44:49 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:19.255 15:44:49 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:19.255 15:44:49 -- common/autotest_common.sh@10 -- # set +x 00:26:19.255 [2024-04-26 15:44:49.357252] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:19.255 [2024-04-26 15:44:49.357346] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:19.255 [2024-04-26 15:44:49.500087] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:19.513 [2024-04-26 15:44:49.618876] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:19.513 [2024-04-26 15:44:49.619210] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:19.513 [2024-04-26 15:44:49.619366] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:19.513 [2024-04-26 15:44:49.619525] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:19.513 [2024-04-26 15:44:49.619560] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
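The aer host test that the remainder of this trace runs follows a short sequence: expose a one-namespace subsystem capped at two namespaces, start the AER tool, then add a second namespace so the target emits a namespace-attribute-changed notice. A sketch of that sequence, with paths and arguments taken from the trace below (scripts/rpc.py as the client is an assumption; the trace goes through rpc_cmd):

rpc=scripts/rpc.py                                       # assumed client
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc bdev_malloc_create 64 512 --name Malloc0
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2   # at most 2 namespaces
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer tool connects and registers AER callbacks; -t names a touch file the script waits on
# before it adds the second namespace (options as logged).
/home/vagrant/spdk_repo/spdk/test/nvme/aer/aer \
    -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
    -n 2 -t /tmp/aer_touch_file &
# Adding a second namespace now triggers the namespace-attribute-changed AEN seen in the log (log page 4).
$rpc bdev_malloc_create 64 4096 --name Malloc1
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2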
00:26:19.513 [2024-04-26 15:44:49.619849] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.513 [2024-04-26 15:44:49.619990] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:19.513 [2024-04-26 15:44:49.620076] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:19.513 [2024-04-26 15:44:49.620075] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:20.079 15:44:50 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:20.079 15:44:50 -- common/autotest_common.sh@850 -- # return 0 00:26:20.079 15:44:50 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:20.079 15:44:50 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:20.079 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.079 15:44:50 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:20.079 15:44:50 -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:20.079 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.079 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.079 [2024-04-26 15:44:50.359794] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:20.337 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.337 15:44:50 -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:26:20.337 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.337 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 Malloc0 00:26:20.337 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.337 15:44:50 -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:26:20.337 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.337 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.337 15:44:50 -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:20.337 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.337 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.337 15:44:50 -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:20.337 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.337 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 [2024-04-26 15:44:50.439982] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:20.337 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.337 15:44:50 -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:26:20.337 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.337 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.337 [2024-04-26 15:44:50.447733] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:20.337 [ 00:26:20.337 { 00:26:20.337 "allow_any_host": true, 00:26:20.337 "hosts": [], 00:26:20.337 "listen_addresses": [], 00:26:20.337 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.337 "subtype": "Discovery" 00:26:20.337 }, 00:26:20.337 { 00:26:20.337 "allow_any_host": true, 00:26:20.337 "hosts": 
[], 00:26:20.337 "listen_addresses": [ 00:26:20.337 { 00:26:20.337 "adrfam": "IPv4", 00:26:20.337 "traddr": "10.0.0.2", 00:26:20.337 "transport": "TCP", 00:26:20.337 "trsvcid": "4420", 00:26:20.337 "trtype": "TCP" 00:26:20.337 } 00:26:20.337 ], 00:26:20.337 "max_cntlid": 65519, 00:26:20.337 "max_namespaces": 2, 00:26:20.337 "min_cntlid": 1, 00:26:20.337 "model_number": "SPDK bdev Controller", 00:26:20.337 "namespaces": [ 00:26:20.337 { 00:26:20.337 "bdev_name": "Malloc0", 00:26:20.337 "name": "Malloc0", 00:26:20.337 "nguid": "B33104C3B59A4A74AB16F464B028E4C2", 00:26:20.337 "nsid": 1, 00:26:20.337 "uuid": "b33104c3-b59a-4a74-ab16-f464b028e4c2" 00:26:20.337 } 00:26:20.337 ], 00:26:20.337 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.337 "serial_number": "SPDK00000000000001", 00:26:20.337 "subtype": "NVMe" 00:26:20.337 } 00:26:20.337 ] 00:26:20.337 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.337 15:44:50 -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:26:20.337 15:44:50 -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:26:20.337 15:44:50 -- host/aer.sh@33 -- # aerpid=79886 00:26:20.337 15:44:50 -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:26:20.337 15:44:50 -- host/aer.sh@27 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:26:20.337 15:44:50 -- common/autotest_common.sh@1251 -- # local i=0 00:26:20.337 15:44:50 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.337 15:44:50 -- common/autotest_common.sh@1253 -- # '[' 0 -lt 200 ']' 00:26:20.337 15:44:50 -- common/autotest_common.sh@1254 -- # i=1 00:26:20.337 15:44:50 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:20.337 15:44:50 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.337 15:44:50 -- common/autotest_common.sh@1253 -- # '[' 1 -lt 200 ']' 00:26:20.337 15:44:50 -- common/autotest_common.sh@1254 -- # i=2 00:26:20.337 15:44:50 -- common/autotest_common.sh@1255 -- # sleep 0.1 00:26:20.595 15:44:50 -- common/autotest_common.sh@1252 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.595 15:44:50 -- common/autotest_common.sh@1258 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:26:20.595 15:44:50 -- common/autotest_common.sh@1262 -- # return 0 00:26:20.595 15:44:50 -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:26:20.595 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.595 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 Malloc1 00:26:20.595 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.595 15:44:50 -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:26:20.595 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.595 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.595 15:44:50 -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:26:20.595 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.595 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 Asynchronous Event Request test 00:26:20.595 Attaching to 10.0.0.2 00:26:20.595 Attached to 10.0.0.2 00:26:20.595 Registering asynchronous event callbacks... 00:26:20.595 Starting namespace attribute notice tests for all controllers... 
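Condensed from the aer.sh trace above, the flow this test exercises is the following; a sketch that uses scripts/rpc.py in place of the suite's rpc_cmd wrapper, with the transport options, bdev sizes, NQN and listener address taken from the trace:

scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192          # TCP transport, 8192-byte in-capsule data
scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0         # 64 MiB RAM bdev, 512-byte blocks
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0      # becomes nsid 1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# (the script then polls for /tmp/aer_touch_file, i.e. waits until the aer tool has attached and registered its callback)
scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1        # hot-add a second bdev...
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2   # ...as nsid 2, which raises the AEN

The aer_cb / "Changed Namespace" lines that follow confirm the namespace-attribute notice for nsid 2 was delivered, after which the script waits for the aer process (pid 79886 in this run) and tears everything down.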
00:26:20.595 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:26:20.595 aer_cb - Changed Namespace 00:26:20.595 Cleaning up... 00:26:20.595 [ 00:26:20.595 { 00:26:20.595 "allow_any_host": true, 00:26:20.595 "hosts": [], 00:26:20.595 "listen_addresses": [], 00:26:20.595 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:20.595 "subtype": "Discovery" 00:26:20.595 }, 00:26:20.595 { 00:26:20.595 "allow_any_host": true, 00:26:20.595 "hosts": [], 00:26:20.595 "listen_addresses": [ 00:26:20.595 { 00:26:20.595 "adrfam": "IPv4", 00:26:20.595 "traddr": "10.0.0.2", 00:26:20.595 "transport": "TCP", 00:26:20.595 "trsvcid": "4420", 00:26:20.595 "trtype": "TCP" 00:26:20.595 } 00:26:20.595 ], 00:26:20.595 "max_cntlid": 65519, 00:26:20.595 "max_namespaces": 2, 00:26:20.595 "min_cntlid": 1, 00:26:20.595 "model_number": "SPDK bdev Controller", 00:26:20.595 "namespaces": [ 00:26:20.595 { 00:26:20.595 "bdev_name": "Malloc0", 00:26:20.595 "name": "Malloc0", 00:26:20.595 "nguid": "B33104C3B59A4A74AB16F464B028E4C2", 00:26:20.595 "nsid": 1, 00:26:20.595 "uuid": "b33104c3-b59a-4a74-ab16-f464b028e4c2" 00:26:20.595 }, 00:26:20.595 { 00:26:20.595 "bdev_name": "Malloc1", 00:26:20.595 "name": "Malloc1", 00:26:20.595 "nguid": "B3AF0DEE8BB64D77A1BEF01EDAD861F8", 00:26:20.595 "nsid": 2, 00:26:20.595 "uuid": "b3af0dee-8bb6-4d77-a1be-f01edad861f8" 00:26:20.595 } 00:26:20.595 ], 00:26:20.595 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:20.595 "serial_number": "SPDK00000000000001", 00:26:20.595 "subtype": "NVMe" 00:26:20.595 } 00:26:20.595 ] 00:26:20.595 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.595 15:44:50 -- host/aer.sh@43 -- # wait 79886 00:26:20.595 15:44:50 -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:26:20.595 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.595 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.595 15:44:50 -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:26:20.595 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.595 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.595 15:44:50 -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:20.595 15:44:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:20.595 15:44:50 -- common/autotest_common.sh@10 -- # set +x 00:26:20.595 15:44:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:20.595 15:44:50 -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:26:20.595 15:44:50 -- host/aer.sh@51 -- # nvmftestfini 00:26:20.595 15:44:50 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:20.595 15:44:50 -- nvmf/common.sh@117 -- # sync 00:26:20.595 15:44:50 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:20.595 15:44:50 -- nvmf/common.sh@120 -- # set +e 00:26:20.595 15:44:50 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:20.595 15:44:50 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:20.595 rmmod nvme_tcp 00:26:20.853 rmmod nvme_fabrics 00:26:20.853 rmmod nvme_keyring 00:26:20.853 15:44:50 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:20.853 15:44:50 -- nvmf/common.sh@124 -- # set -e 00:26:20.853 15:44:50 -- nvmf/common.sh@125 -- # return 0 00:26:20.853 15:44:50 -- nvmf/common.sh@478 -- # '[' -n 79825 ']' 00:26:20.853 15:44:50 -- nvmf/common.sh@479 -- # killprocess 79825 00:26:20.853 15:44:50 -- 
common/autotest_common.sh@936 -- # '[' -z 79825 ']' 00:26:20.853 15:44:50 -- common/autotest_common.sh@940 -- # kill -0 79825 00:26:20.853 15:44:50 -- common/autotest_common.sh@941 -- # uname 00:26:20.853 15:44:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:20.853 15:44:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 79825 00:26:20.853 killing process with pid 79825 00:26:20.853 15:44:50 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:20.853 15:44:50 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:20.853 15:44:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 79825' 00:26:20.853 15:44:50 -- common/autotest_common.sh@955 -- # kill 79825 00:26:20.854 [2024-04-26 15:44:50.943842] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:20.854 15:44:50 -- common/autotest_common.sh@960 -- # wait 79825 00:26:21.111 15:44:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:21.111 15:44:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:21.111 15:44:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:21.111 15:44:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:21.111 15:44:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:21.111 15:44:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.111 15:44:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.111 15:44:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.111 15:44:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:21.111 00:26:21.111 real 0m2.399s 00:26:21.111 user 0m6.363s 00:26:21.111 sys 0m0.656s 00:26:21.111 15:44:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:21.111 ************************************ 00:26:21.111 END TEST nvmf_aer 00:26:21.111 15:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.111 ************************************ 00:26:21.111 15:44:51 -- nvmf/nvmf.sh@91 -- # run_test nvmf_async_init /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:21.111 15:44:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:21.111 15:44:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:21.111 15:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.111 ************************************ 00:26:21.111 START TEST nvmf_async_init 00:26:21.111 ************************************ 00:26:21.111 15:44:51 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:26:21.370 * Looking for test storage... 
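Before the async_init output starts in earnest, note the teardown pattern just traced for nvmf_aer; every host test in this run ends the same way. A rough sketch of what killprocess plus nvmftestfini amount to, reconstructed from the trace rather than quoted from autotest_common.sh (the netns removal line is an assumption about what _remove_spdk_ns does):

kill -0 "$nvmfpid"                      # make sure the target process is still alive
ps --no-headers -o comm= "$nvmfpid"     # sanity check: it should be an SPDK reactor, and never sudo
kill "$nvmfpid" && wait "$nvmfpid"      # stop nvmf_tgt and reap it
sync
modprobe -v -r nvme-tcp                 # unload the kernel initiator modules (the rmmod lines in the trace)
modprobe -v -r nvme-fabrics
ip netns delete nvmf_tgt_ns_spdk        # assumed: drop the target network namespace
ip -4 addr flush nvmf_init_if           # flush the initiator-side test addresses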
00:26:21.370 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:21.370 15:44:51 -- host/async_init.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:21.370 15:44:51 -- nvmf/common.sh@7 -- # uname -s 00:26:21.370 15:44:51 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:21.370 15:44:51 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:21.370 15:44:51 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:21.370 15:44:51 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:21.370 15:44:51 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:21.370 15:44:51 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:21.370 15:44:51 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:21.370 15:44:51 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:21.370 15:44:51 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:21.370 15:44:51 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:21.370 15:44:51 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:21.370 15:44:51 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:21.370 15:44:51 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:21.370 15:44:51 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:21.370 15:44:51 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:21.370 15:44:51 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:21.370 15:44:51 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:21.370 15:44:51 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:21.370 15:44:51 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:21.370 15:44:51 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:21.370 15:44:51 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.370 15:44:51 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.370 15:44:51 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.370 15:44:51 -- paths/export.sh@5 -- # export PATH 00:26:21.370 15:44:51 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:21.370 15:44:51 -- nvmf/common.sh@47 -- # : 0 00:26:21.370 15:44:51 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:21.370 15:44:51 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:21.370 15:44:51 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:21.370 15:44:51 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:21.370 15:44:51 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:21.370 15:44:51 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:21.370 15:44:51 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:21.370 15:44:51 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:21.370 15:44:51 -- host/async_init.sh@13 -- # null_bdev_size=1024 00:26:21.370 15:44:51 -- host/async_init.sh@14 -- # null_block_size=512 00:26:21.370 15:44:51 -- host/async_init.sh@15 -- # null_bdev=null0 00:26:21.370 15:44:51 -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:26:21.370 15:44:51 -- host/async_init.sh@20 -- # uuidgen 00:26:21.370 15:44:51 -- host/async_init.sh@20 -- # tr -d - 00:26:21.370 15:44:51 -- host/async_init.sh@20 -- # nguid=b46e0ecaa9254bb69ffede8fd99f48bc 00:26:21.370 15:44:51 -- host/async_init.sh@22 -- # nvmftestinit 00:26:21.370 15:44:51 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:21.370 15:44:51 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:21.370 15:44:51 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:21.370 15:44:51 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:21.370 15:44:51 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:21.370 15:44:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:21.370 15:44:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:21.370 15:44:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:21.370 15:44:51 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:21.370 15:44:51 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:21.370 15:44:51 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:21.370 15:44:51 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:21.370 15:44:51 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:21.370 15:44:51 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:21.370 15:44:51 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:21.370 15:44:51 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:21.370 15:44:51 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:21.370 15:44:51 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:21.370 15:44:51 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:21.370 15:44:51 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:21.370 15:44:51 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:21.370 15:44:51 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:21.370 15:44:51 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:21.370 15:44:51 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:21.370 15:44:51 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:21.370 15:44:51 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:21.370 15:44:51 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:21.370 15:44:51 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:21.370 Cannot find device "nvmf_tgt_br" 00:26:21.370 15:44:51 -- nvmf/common.sh@155 -- # true 00:26:21.370 15:44:51 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:21.370 Cannot find device "nvmf_tgt_br2" 00:26:21.370 15:44:51 -- nvmf/common.sh@156 -- # true 00:26:21.370 15:44:51 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:21.370 15:44:51 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:21.370 Cannot find device "nvmf_tgt_br" 00:26:21.370 15:44:51 -- nvmf/common.sh@158 -- # true 00:26:21.370 15:44:51 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:21.370 Cannot find device "nvmf_tgt_br2" 00:26:21.370 15:44:51 -- nvmf/common.sh@159 -- # true 00:26:21.370 15:44:51 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:21.370 15:44:51 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:21.370 15:44:51 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:21.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:21.370 15:44:51 -- nvmf/common.sh@162 -- # true 00:26:21.370 15:44:51 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:21.370 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:21.370 15:44:51 -- nvmf/common.sh@163 -- # true 00:26:21.370 15:44:51 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:21.370 15:44:51 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:21.370 15:44:51 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:21.370 15:44:51 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:21.370 15:44:51 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:21.370 15:44:51 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:21.637 15:44:51 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:21.637 15:44:51 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:21.637 15:44:51 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:21.637 15:44:51 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:21.637 15:44:51 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:21.637 15:44:51 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:21.637 15:44:51 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:21.637 15:44:51 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:21.637 15:44:51 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:21.637 15:44:51 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:21.637 15:44:51 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:21.637 15:44:51 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:21.637 15:44:51 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:21.637 15:44:51 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:21.637 15:44:51 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:21.637 15:44:51 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:21.637 15:44:51 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:21.637 15:44:51 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:21.637 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:21.637 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.079 ms 00:26:21.637 00:26:21.637 --- 10.0.0.2 ping statistics --- 00:26:21.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.637 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:26:21.637 15:44:51 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:21.637 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:21.637 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.040 ms 00:26:21.637 00:26:21.637 --- 10.0.0.3 ping statistics --- 00:26:21.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.637 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:26:21.637 15:44:51 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:21.637 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:21.637 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.032 ms 00:26:21.637 00:26:21.637 --- 10.0.0.1 ping statistics --- 00:26:21.637 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:21.637 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:26:21.638 15:44:51 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:21.638 15:44:51 -- nvmf/common.sh@422 -- # return 0 00:26:21.638 15:44:51 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:21.638 15:44:51 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:21.638 15:44:51 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:21.638 15:44:51 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:21.638 15:44:51 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:21.638 15:44:51 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:21.638 15:44:51 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:21.638 15:44:51 -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:26:21.638 15:44:51 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:21.638 15:44:51 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:21.638 15:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.638 15:44:51 -- nvmf/common.sh@470 -- # nvmfpid=80066 00:26:21.638 15:44:51 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:26:21.638 15:44:51 -- nvmf/common.sh@471 -- # waitforlisten 80066 00:26:21.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
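With NET_TYPE=virt, nvmf_veth_init builds the entire test network from veth pairs and a bridge; that is what the run of ip commands above is doing. Stripped of the xtrace noise, the essential topology is (a condensed sketch of commands already shown; the second target interface nvmf_tgt_if2 at 10.0.0.3 is set up the same way):

ip netns add nvmf_tgt_ns_spdk                                    # the target runs inside this namespace
ip link add nvmf_init_if type veth peer name nvmf_init_br        # initiator-side veth pair
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br          # target-side veth pair
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                         # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if    # listener address used by the tests
ip link add nvmf_br type bridge                                  # bridge joining the *_br peer ends
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT

The three pings (10.0.0.2, 10.0.0.3, then 10.0.0.1 from inside the namespace) are only a connectivity check; once they pass, modprobe nvme-tcp loads the kernel initiator and nvmf_tgt is started inside nvmf_tgt_ns_spdk.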
00:26:21.638 15:44:51 -- common/autotest_common.sh@817 -- # '[' -z 80066 ']' 00:26:21.638 15:44:51 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:21.638 15:44:51 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:21.638 15:44:51 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:21.638 15:44:51 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:21.638 15:44:51 -- common/autotest_common.sh@10 -- # set +x 00:26:21.638 [2024-04-26 15:44:51.885339] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:21.638 [2024-04-26 15:44:51.885426] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:21.914 [2024-04-26 15:44:52.023960] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:21.914 [2024-04-26 15:44:52.149705] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:21.914 [2024-04-26 15:44:52.149769] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:21.914 [2024-04-26 15:44:52.149792] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:21.914 [2024-04-26 15:44:52.149803] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:21.914 [2024-04-26 15:44:52.149813] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:21.914 [2024-04-26 15:44:52.149846] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:22.849 15:44:52 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:22.849 15:44:52 -- common/autotest_common.sh@850 -- # return 0 00:26:22.849 15:44:52 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:22.849 15:44:52 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:22.849 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.849 15:44:52 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:22.849 15:44:52 -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:22.849 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.849 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.849 [2024-04-26 15:44:52.905876] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:22.849 15:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.849 15:44:52 -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:26:22.849 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.849 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.849 null0 00:26:22.849 15:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.850 15:44:52 -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:26:22.850 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.850 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.850 15:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.850 15:44:52 -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:26:22.850 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.850 15:44:52 -- 
common/autotest_common.sh@10 -- # set +x 00:26:22.850 15:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.850 15:44:52 -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g b46e0ecaa9254bb69ffede8fd99f48bc 00:26:22.850 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.850 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.850 15:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.850 15:44:52 -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:22.850 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.850 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:22.850 [2024-04-26 15:44:52.957985] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:22.850 15:44:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:22.850 15:44:52 -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:26:22.850 15:44:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:22.850 15:44:52 -- common/autotest_common.sh@10 -- # set +x 00:26:23.107 nvme0n1 00:26:23.107 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.107 15:44:53 -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.107 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.107 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.107 [ 00:26:23.107 { 00:26:23.107 "aliases": [ 00:26:23.107 "b46e0eca-a925-4bb6-9ffe-de8fd99f48bc" 00:26:23.107 ], 00:26:23.107 "assigned_rate_limits": { 00:26:23.107 "r_mbytes_per_sec": 0, 00:26:23.107 "rw_ios_per_sec": 0, 00:26:23.107 "rw_mbytes_per_sec": 0, 00:26:23.107 "w_mbytes_per_sec": 0 00:26:23.107 }, 00:26:23.107 "block_size": 512, 00:26:23.107 "claimed": false, 00:26:23.107 "driver_specific": { 00:26:23.107 "mp_policy": "active_passive", 00:26:23.107 "nvme": [ 00:26:23.107 { 00:26:23.107 "ctrlr_data": { 00:26:23.107 "ana_reporting": false, 00:26:23.107 "cntlid": 1, 00:26:23.107 "firmware_revision": "24.05", 00:26:23.107 "model_number": "SPDK bdev Controller", 00:26:23.107 "multi_ctrlr": true, 00:26:23.107 "oacs": { 00:26:23.107 "firmware": 0, 00:26:23.107 "format": 0, 00:26:23.107 "ns_manage": 0, 00:26:23.107 "security": 0 00:26:23.107 }, 00:26:23.107 "serial_number": "00000000000000000000", 00:26:23.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.107 "vendor_id": "0x8086" 00:26:23.107 }, 00:26:23.107 "ns_data": { 00:26:23.107 "can_share": true, 00:26:23.107 "id": 1 00:26:23.107 }, 00:26:23.107 "trid": { 00:26:23.107 "adrfam": "IPv4", 00:26:23.107 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.107 "traddr": "10.0.0.2", 00:26:23.107 "trsvcid": "4420", 00:26:23.107 "trtype": "TCP" 00:26:23.107 }, 00:26:23.107 "vs": { 00:26:23.107 "nvme_version": "1.3" 00:26:23.107 } 00:26:23.107 } 00:26:23.107 ] 00:26:23.107 }, 00:26:23.108 "memory_domains": [ 00:26:23.108 { 00:26:23.108 "dma_device_id": "system", 00:26:23.108 "dma_device_type": 1 00:26:23.108 } 00:26:23.108 ], 00:26:23.108 "name": "nvme0n1", 00:26:23.108 "num_blocks": 2097152, 00:26:23.108 "product_name": "NVMe disk", 00:26:23.108 "supported_io_types": { 00:26:23.108 "abort": true, 00:26:23.108 "compare": true, 00:26:23.108 "compare_and_write": true, 00:26:23.108 "flush": true, 00:26:23.108 "nvme_admin": true, 00:26:23.108 "nvme_io": true, 
00:26:23.108 "read": true, 00:26:23.108 "reset": true, 00:26:23.108 "unmap": false, 00:26:23.108 "write": true, 00:26:23.108 "write_zeroes": true 00:26:23.108 }, 00:26:23.108 "uuid": "b46e0eca-a925-4bb6-9ffe-de8fd99f48bc", 00:26:23.108 "zoned": false 00:26:23.108 } 00:26:23.108 ] 00:26:23.108 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.108 15:44:53 -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:26:23.108 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.108 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.108 [2024-04-26 15:44:53.234056] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:26:23.108 [2024-04-26 15:44:53.234204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ef42d0 (9): Bad file descriptor 00:26:23.108 [2024-04-26 15:44:53.366300] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:26:23.108 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.108 15:44:53 -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.108 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.108 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.108 [ 00:26:23.108 { 00:26:23.108 "aliases": [ 00:26:23.108 "b46e0eca-a925-4bb6-9ffe-de8fd99f48bc" 00:26:23.108 ], 00:26:23.108 "assigned_rate_limits": { 00:26:23.108 "r_mbytes_per_sec": 0, 00:26:23.108 "rw_ios_per_sec": 0, 00:26:23.108 "rw_mbytes_per_sec": 0, 00:26:23.108 "w_mbytes_per_sec": 0 00:26:23.108 }, 00:26:23.108 "block_size": 512, 00:26:23.108 "claimed": false, 00:26:23.108 "driver_specific": { 00:26:23.108 "mp_policy": "active_passive", 00:26:23.108 "nvme": [ 00:26:23.108 { 00:26:23.108 "ctrlr_data": { 00:26:23.108 "ana_reporting": false, 00:26:23.108 "cntlid": 2, 00:26:23.108 "firmware_revision": "24.05", 00:26:23.108 "model_number": "SPDK bdev Controller", 00:26:23.108 "multi_ctrlr": true, 00:26:23.108 "oacs": { 00:26:23.108 "firmware": 0, 00:26:23.108 "format": 0, 00:26:23.108 "ns_manage": 0, 00:26:23.108 "security": 0 00:26:23.108 }, 00:26:23.108 "serial_number": "00000000000000000000", 00:26:23.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.108 "vendor_id": "0x8086" 00:26:23.108 }, 00:26:23.108 "ns_data": { 00:26:23.108 "can_share": true, 00:26:23.108 "id": 1 00:26:23.108 }, 00:26:23.108 "trid": { 00:26:23.108 "adrfam": "IPv4", 00:26:23.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.108 "traddr": "10.0.0.2", 00:26:23.108 "trsvcid": "4420", 00:26:23.108 "trtype": "TCP" 00:26:23.108 }, 00:26:23.108 "vs": { 00:26:23.108 "nvme_version": "1.3" 00:26:23.108 } 00:26:23.108 } 00:26:23.108 ] 00:26:23.108 }, 00:26:23.108 "memory_domains": [ 00:26:23.108 { 00:26:23.108 "dma_device_id": "system", 00:26:23.108 "dma_device_type": 1 00:26:23.108 } 00:26:23.108 ], 00:26:23.108 "name": "nvme0n1", 00:26:23.108 "num_blocks": 2097152, 00:26:23.108 "product_name": "NVMe disk", 00:26:23.108 "supported_io_types": { 00:26:23.108 "abort": true, 00:26:23.108 "compare": true, 00:26:23.108 "compare_and_write": true, 00:26:23.108 "flush": true, 00:26:23.108 "nvme_admin": true, 00:26:23.108 "nvme_io": true, 00:26:23.108 "read": true, 00:26:23.366 "reset": true, 00:26:23.366 "unmap": false, 00:26:23.366 "write": true, 00:26:23.366 "write_zeroes": true 00:26:23.366 }, 00:26:23.366 "uuid": "b46e0eca-a925-4bb6-9ffe-de8fd99f48bc", 00:26:23.366 "zoned": false 00:26:23.366 } 00:26:23.366 
] 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@53 -- # mktemp 00:26:23.366 15:44:53 -- host/async_init.sh@53 -- # key_path=/tmp/tmp.pC1Su2Pw5a 00:26:23.366 15:44:53 -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:26:23.366 15:44:53 -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.pC1Su2Pw5a 00:26:23.366 15:44:53 -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 [2024-04-26 15:44:53.446196] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:26:23.366 [2024-04-26 15:44:53.446351] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pC1Su2Pw5a 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 [2024-04-26 15:44:53.454192] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pC1Su2Pw5a 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 [2024-04-26 15:44:53.462194] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:26:23.366 [2024-04-26 15:44:53.462407] nvme_tcp.c:2577:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:26:23.366 nvme0n1 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 [ 00:26:23.366 { 00:26:23.366 "aliases": [ 00:26:23.366 "b46e0eca-a925-4bb6-9ffe-de8fd99f48bc" 00:26:23.366 ], 00:26:23.366 "assigned_rate_limits": { 00:26:23.366 "r_mbytes_per_sec": 0, 00:26:23.366 "rw_ios_per_sec": 0, 00:26:23.366 "rw_mbytes_per_sec": 0, 00:26:23.366 
"w_mbytes_per_sec": 0 00:26:23.366 }, 00:26:23.366 "block_size": 512, 00:26:23.366 "claimed": false, 00:26:23.366 "driver_specific": { 00:26:23.366 "mp_policy": "active_passive", 00:26:23.366 "nvme": [ 00:26:23.366 { 00:26:23.366 "ctrlr_data": { 00:26:23.366 "ana_reporting": false, 00:26:23.366 "cntlid": 3, 00:26:23.366 "firmware_revision": "24.05", 00:26:23.366 "model_number": "SPDK bdev Controller", 00:26:23.366 "multi_ctrlr": true, 00:26:23.366 "oacs": { 00:26:23.366 "firmware": 0, 00:26:23.366 "format": 0, 00:26:23.366 "ns_manage": 0, 00:26:23.366 "security": 0 00:26:23.366 }, 00:26:23.366 "serial_number": "00000000000000000000", 00:26:23.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.366 "vendor_id": "0x8086" 00:26:23.366 }, 00:26:23.366 "ns_data": { 00:26:23.366 "can_share": true, 00:26:23.366 "id": 1 00:26:23.366 }, 00:26:23.366 "trid": { 00:26:23.366 "adrfam": "IPv4", 00:26:23.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:23.366 "traddr": "10.0.0.2", 00:26:23.366 "trsvcid": "4421", 00:26:23.366 "trtype": "TCP" 00:26:23.366 }, 00:26:23.366 "vs": { 00:26:23.366 "nvme_version": "1.3" 00:26:23.366 } 00:26:23.366 } 00:26:23.366 ] 00:26:23.366 }, 00:26:23.366 "memory_domains": [ 00:26:23.366 { 00:26:23.366 "dma_device_id": "system", 00:26:23.366 "dma_device_type": 1 00:26:23.366 } 00:26:23.366 ], 00:26:23.366 "name": "nvme0n1", 00:26:23.366 "num_blocks": 2097152, 00:26:23.366 "product_name": "NVMe disk", 00:26:23.366 "supported_io_types": { 00:26:23.366 "abort": true, 00:26:23.366 "compare": true, 00:26:23.366 "compare_and_write": true, 00:26:23.366 "flush": true, 00:26:23.366 "nvme_admin": true, 00:26:23.366 "nvme_io": true, 00:26:23.366 "read": true, 00:26:23.366 "reset": true, 00:26:23.366 "unmap": false, 00:26:23.366 "write": true, 00:26:23.366 "write_zeroes": true 00:26:23.366 }, 00:26:23.366 "uuid": "b46e0eca-a925-4bb6-9ffe-de8fd99f48bc", 00:26:23.366 "zoned": false 00:26:23.366 } 00:26:23.366 ] 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.366 15:44:53 -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:26:23.366 15:44:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:23.366 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.366 15:44:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:23.367 15:44:53 -- host/async_init.sh@75 -- # rm -f /tmp/tmp.pC1Su2Pw5a 00:26:23.367 15:44:53 -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:26:23.367 15:44:53 -- host/async_init.sh@78 -- # nvmftestfini 00:26:23.367 15:44:53 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:23.367 15:44:53 -- nvmf/common.sh@117 -- # sync 00:26:23.367 15:44:53 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:23.367 15:44:53 -- nvmf/common.sh@120 -- # set +e 00:26:23.367 15:44:53 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:23.367 15:44:53 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:23.367 rmmod nvme_tcp 00:26:23.367 rmmod nvme_fabrics 00:26:23.624 rmmod nvme_keyring 00:26:23.624 15:44:53 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:23.624 15:44:53 -- nvmf/common.sh@124 -- # set -e 00:26:23.625 15:44:53 -- nvmf/common.sh@125 -- # return 0 00:26:23.625 15:44:53 -- nvmf/common.sh@478 -- # '[' -n 80066 ']' 00:26:23.625 15:44:53 -- nvmf/common.sh@479 -- # killprocess 80066 00:26:23.625 15:44:53 -- common/autotest_common.sh@936 -- # '[' -z 80066 ']' 00:26:23.625 15:44:53 -- common/autotest_common.sh@940 -- # kill -0 80066 00:26:23.625 15:44:53 -- common/autotest_common.sh@941 -- # 
uname 00:26:23.625 15:44:53 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:23.625 15:44:53 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80066 00:26:23.625 killing process with pid 80066 00:26:23.625 15:44:53 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:23.625 15:44:53 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:23.625 15:44:53 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80066' 00:26:23.625 15:44:53 -- common/autotest_common.sh@955 -- # kill 80066 00:26:23.625 [2024-04-26 15:44:53.715898] app.c: 937:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:26:23.625 [2024-04-26 15:44:53.715941] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:26:23.625 15:44:53 -- common/autotest_common.sh@960 -- # wait 80066 00:26:23.883 15:44:53 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:23.883 15:44:53 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:23.883 15:44:53 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:23.883 15:44:53 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:23.883 15:44:53 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:23.883 15:44:53 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:23.883 15:44:53 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:23.883 15:44:53 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:23.883 15:44:53 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:23.883 00:26:23.883 real 0m2.642s 00:26:23.883 user 0m2.510s 00:26:23.883 sys 0m0.571s 00:26:23.883 ************************************ 00:26:23.883 END TEST nvmf_async_init 00:26:23.883 ************************************ 00:26:23.883 15:44:53 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:23.883 15:44:53 -- common/autotest_common.sh@10 -- # set +x 00:26:23.883 15:44:54 -- nvmf/nvmf.sh@92 -- # run_test dma /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:23.883 15:44:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:23.883 15:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:23.883 15:44:54 -- common/autotest_common.sh@10 -- # set +x 00:26:23.883 ************************************ 00:26:23.883 START TEST dma 00:26:23.883 ************************************ 00:26:23.883 15:44:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/dma.sh --transport=tcp 00:26:23.883 * Looking for test storage... 
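The nvmf_async_init trace that just finished exercises the host attach path twice: in the clear on port 4420, then over a TLS secure channel on port 4421 with a pre-shared key. Reduced to the RPCs visible in the trace, the TLS leg looks roughly like the sketch below (scripts/rpc.py stands in for rpc_cmd, and the PSK literal is elided; it is the NVMeTLSkey-1:01:... interchange key echoed in the trace):

key=$(mktemp)                                    # /tmp/tmp.pC1Su2Pw5a in this run
echo -n 'NVMeTLSkey-1:01:...' > "$key"           # interchange-format PSK
chmod 0600 "$key"
scripts/rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"

Both the secure-channel listener and the attach emit "TLS support is considered experimental" notices, and the PSK-path variants are flagged as deprecated for removal in v24.09, which is what the two deprecation hits logged at shutdown refer to.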
00:26:24.141 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:24.141 15:44:54 -- host/dma.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.141 15:44:54 -- nvmf/common.sh@7 -- # uname -s 00:26:24.141 15:44:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.141 15:44:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.141 15:44:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.141 15:44:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.141 15:44:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.141 15:44:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.141 15:44:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.141 15:44:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.141 15:44:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.141 15:44:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.141 15:44:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:24.141 15:44:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:24.141 15:44:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.141 15:44:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.141 15:44:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:24.141 15:44:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.141 15:44:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.141 15:44:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.141 15:44:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.141 15:44:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.141 15:44:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.141 15:44:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.141 15:44:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.141 15:44:54 -- paths/export.sh@5 -- # export PATH 00:26:24.142 15:44:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.142 15:44:54 -- nvmf/common.sh@47 -- # : 0 00:26:24.142 15:44:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.142 15:44:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.142 15:44:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.142 15:44:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.142 15:44:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.142 15:44:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.142 15:44:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.142 15:44:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.142 15:44:54 -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:26:24.142 15:44:54 -- host/dma.sh@13 -- # exit 0 00:26:24.142 ************************************ 00:26:24.142 END TEST dma 00:26:24.142 ************************************ 00:26:24.142 00:26:24.142 real 0m0.099s 00:26:24.142 user 0m0.045s 00:26:24.142 sys 0m0.060s 00:26:24.142 15:44:54 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:24.142 15:44:54 -- common/autotest_common.sh@10 -- # set +x 00:26:24.142 15:44:54 -- nvmf/nvmf.sh@95 -- # run_test nvmf_identify /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:24.142 15:44:54 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:24.142 15:44:54 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:24.142 15:44:54 -- common/autotest_common.sh@10 -- # set +x 00:26:24.142 ************************************ 00:26:24.142 START TEST nvmf_identify 00:26:24.142 ************************************ 00:26:24.142 15:44:54 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify.sh --transport=tcp 00:26:24.142 * Looking for test storage... 
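The dma test above finishes almost immediately because host/dma.sh only applies to RDMA transports; with --transport=tcp the guard traced above compares tcp against rdma and exits 0 straight away, which is why its reported real time is a fraction of a second. As a sketch (the variable name is an assumption; the trace only shows the already-expanded comparison):

[ "$transport" != rdma ] && exit 0     # host/dma.sh@12-13 in the trace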
00:26:24.142 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:24.142 15:44:54 -- host/identify.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:24.142 15:44:54 -- nvmf/common.sh@7 -- # uname -s 00:26:24.142 15:44:54 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:24.142 15:44:54 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:24.142 15:44:54 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:24.142 15:44:54 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:24.142 15:44:54 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:24.142 15:44:54 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:24.142 15:44:54 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:24.142 15:44:54 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:24.142 15:44:54 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:24.142 15:44:54 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:24.142 15:44:54 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:24.142 15:44:54 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:24.142 15:44:54 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:24.142 15:44:54 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:24.142 15:44:54 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:24.142 15:44:54 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:24.142 15:44:54 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:24.142 15:44:54 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:24.142 15:44:54 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:24.142 15:44:54 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:24.142 15:44:54 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.142 15:44:54 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.142 15:44:54 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.142 15:44:54 -- paths/export.sh@5 -- # export PATH 00:26:24.142 15:44:54 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:24.142 15:44:54 -- nvmf/common.sh@47 -- # : 0 00:26:24.142 15:44:54 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:24.142 15:44:54 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:24.142 15:44:54 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:24.142 15:44:54 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:24.142 15:44:54 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:24.142 15:44:54 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:24.142 15:44:54 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:24.142 15:44:54 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:24.142 15:44:54 -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:24.142 15:44:54 -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:24.142 15:44:54 -- host/identify.sh@14 -- # nvmftestinit 00:26:24.142 15:44:54 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:24.401 15:44:54 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:24.401 15:44:54 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:24.401 15:44:54 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:24.401 15:44:54 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:24.401 15:44:54 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:24.401 15:44:54 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:24.401 15:44:54 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:24.401 15:44:54 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:24.401 15:44:54 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:24.401 15:44:54 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:24.401 15:44:54 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:24.401 15:44:54 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:24.401 15:44:54 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:24.401 15:44:54 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:24.401 15:44:54 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:24.401 15:44:54 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:24.401 15:44:54 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:24.401 15:44:54 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:24.401 15:44:54 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:24.401 15:44:54 -- nvmf/common.sh@147 -- # 
NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:24.401 15:44:54 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:24.401 15:44:54 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:24.401 15:44:54 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:24.401 15:44:54 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:24.401 15:44:54 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:24.401 15:44:54 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:24.401 15:44:54 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:24.401 Cannot find device "nvmf_tgt_br" 00:26:24.401 15:44:54 -- nvmf/common.sh@155 -- # true 00:26:24.401 15:44:54 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:24.401 Cannot find device "nvmf_tgt_br2" 00:26:24.401 15:44:54 -- nvmf/common.sh@156 -- # true 00:26:24.401 15:44:54 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:24.401 15:44:54 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:24.401 Cannot find device "nvmf_tgt_br" 00:26:24.401 15:44:54 -- nvmf/common.sh@158 -- # true 00:26:24.401 15:44:54 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:24.401 Cannot find device "nvmf_tgt_br2" 00:26:24.401 15:44:54 -- nvmf/common.sh@159 -- # true 00:26:24.401 15:44:54 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:24.401 15:44:54 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:24.401 15:44:54 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:24.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.401 15:44:54 -- nvmf/common.sh@162 -- # true 00:26:24.401 15:44:54 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:24.401 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:24.401 15:44:54 -- nvmf/common.sh@163 -- # true 00:26:24.401 15:44:54 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:24.401 15:44:54 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:24.401 15:44:54 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:24.401 15:44:54 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:24.401 15:44:54 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:24.401 15:44:54 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:24.401 15:44:54 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:24.658 15:44:54 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:24.658 15:44:54 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:24.658 15:44:54 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:24.658 15:44:54 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:24.658 15:44:54 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:24.658 15:44:54 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:24.658 15:44:54 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:24.658 15:44:54 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:24.658 15:44:54 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip 
link set lo up 00:26:24.658 15:44:54 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:24.658 15:44:54 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:24.658 15:44:54 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:24.658 15:44:54 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:24.658 15:44:54 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:24.658 15:44:54 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:24.658 15:44:54 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:24.658 15:44:54 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:24.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:24.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:26:24.658 00:26:24.658 --- 10.0.0.2 ping statistics --- 00:26:24.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.659 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:26:24.659 15:44:54 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:24.659 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:24.659 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:26:24.659 00:26:24.659 --- 10.0.0.3 ping statistics --- 00:26:24.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.659 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:26:24.659 15:44:54 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:24.659 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:24.659 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:26:24.659 00:26:24.659 --- 10.0.0.1 ping statistics --- 00:26:24.659 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:24.659 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:26:24.659 15:44:54 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:24.659 15:44:54 -- nvmf/common.sh@422 -- # return 0 00:26:24.659 15:44:54 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:24.659 15:44:54 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:24.659 15:44:54 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:24.659 15:44:54 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:24.659 15:44:54 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:24.659 15:44:54 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:24.659 15:44:54 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:24.659 15:44:54 -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:26:24.659 15:44:54 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:24.659 15:44:54 -- common/autotest_common.sh@10 -- # set +x 00:26:24.659 15:44:54 -- host/identify.sh@19 -- # nvmfpid=80349 00:26:24.659 15:44:54 -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:24.659 15:44:54 -- host/identify.sh@18 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:24.659 15:44:54 -- host/identify.sh@23 -- # waitforlisten 80349 00:26:24.659 15:44:54 -- common/autotest_common.sh@817 -- # '[' -z 80349 ']' 00:26:24.659 15:44:54 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:24.659 15:44:54 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:24.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
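The nvmf_veth_init steps traced above build a small veth/bridge topology: the initiator side stays in the default network namespace on 10.0.0.1, the target's two interfaces (10.0.0.2 and 10.0.0.3) live in the nvmf_tgt_ns_spdk namespace, the peer ends are joined through the nvmf_br bridge, and port 4420 is opened in iptables before connectivity is verified with ping. A minimal shell sketch of the same setup, reusing the namespace and interface names from the trace (the canonical version is the nvmf/common.sh helper shown in the trace prefixes):

  # target-side interfaces live in their own namespace
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk

  # initiator keeps 10.0.0.1; the target namespace gets 10.0.0.2 and 10.0.0.3
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2

  # bring everything up and bridge the peer ends together
  ip link set nvmf_init_if up && ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up && ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  ip link add nvmf_br type bridge && ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br

  # allow NVMe/TCP (port 4420) in, and forwarding across the bridge
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

  # same reachability checks as in the log
  ping -c 1 10.0.0.2 && ping -c 1 10.0.0.3
  ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1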
00:26:24.659 15:44:54 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:24.659 15:44:54 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:24.659 15:44:54 -- common/autotest_common.sh@10 -- # set +x 00:26:24.659 [2024-04-26 15:44:54.886237] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:24.659 [2024-04-26 15:44:54.886318] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:24.918 [2024-04-26 15:44:55.027521] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:24.918 [2024-04-26 15:44:55.153382] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:24.918 [2024-04-26 15:44:55.153685] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:24.918 [2024-04-26 15:44:55.153890] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:24.918 [2024-04-26 15:44:55.154015] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:24.918 [2024-04-26 15:44:55.154029] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:24.918 [2024-04-26 15:44:55.154209] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.918 [2024-04-26 15:44:55.154331] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:24.918 [2024-04-26 15:44:55.154466] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:24.918 [2024-04-26 15:44:55.154473] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:25.850 15:44:55 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:25.850 15:44:55 -- common/autotest_common.sh@850 -- # return 0 00:26:25.850 15:44:55 -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:25.850 15:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:55 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 [2024-04-26 15:44:55.932688] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:25.850 15:44:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.850 15:44:55 -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:26:25.850 15:44:55 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:25.850 15:44:55 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 15:44:55 -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:25.850 15:44:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:55 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 Malloc0 00:26:25.850 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.850 15:44:56 -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:25.850 15:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:56 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.850 15:44:56 -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:26:25.850 15:44:56 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:56 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.850 15:44:56 -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:25.850 15:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:56 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 [2024-04-26 15:44:56.042462] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:25.850 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.850 15:44:56 -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:25.850 15:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:56 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.850 15:44:56 -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:26:25.850 15:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:25.850 15:44:56 -- common/autotest_common.sh@10 -- # set +x 00:26:25.850 [2024-04-26 15:44:56.058234] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:26:25.850 [ 00:26:25.850 { 00:26:25.850 "allow_any_host": true, 00:26:25.850 "hosts": [], 00:26:25.850 "listen_addresses": [ 00:26:25.850 { 00:26:25.850 "adrfam": "IPv4", 00:26:25.850 "traddr": "10.0.0.2", 00:26:25.850 "transport": "TCP", 00:26:25.850 "trsvcid": "4420", 00:26:25.850 "trtype": "TCP" 00:26:25.850 } 00:26:25.850 ], 00:26:25.850 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:26:25.851 "subtype": "Discovery" 00:26:25.851 }, 00:26:25.851 { 00:26:25.851 "allow_any_host": true, 00:26:25.851 "hosts": [], 00:26:25.851 "listen_addresses": [ 00:26:25.851 { 00:26:25.851 "adrfam": "IPv4", 00:26:25.851 "traddr": "10.0.0.2", 00:26:25.851 "transport": "TCP", 00:26:25.851 "trsvcid": "4420", 00:26:25.851 "trtype": "TCP" 00:26:25.851 } 00:26:25.851 ], 00:26:25.851 "max_cntlid": 65519, 00:26:25.851 "max_namespaces": 32, 00:26:25.851 "min_cntlid": 1, 00:26:25.851 "model_number": "SPDK bdev Controller", 00:26:25.851 "namespaces": [ 00:26:25.851 { 00:26:25.851 "bdev_name": "Malloc0", 00:26:25.851 "eui64": "ABCDEF0123456789", 00:26:25.851 "name": "Malloc0", 00:26:25.851 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:26:25.851 "nsid": 1, 00:26:25.851 "uuid": "0e6b32b8-19d6-4ac1-9e7a-289bae09eb7b" 00:26:25.851 } 00:26:25.851 ], 00:26:25.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:26:25.851 "serial_number": "SPDK00000000000001", 00:26:25.851 "subtype": "NVMe" 00:26:25.851 } 00:26:25.851 ] 00:26:25.851 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:25.851 15:44:56 -- host/identify.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:26:25.851 [2024-04-26 15:44:56.092795] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
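Everything host/identify.sh does to the target happens over JSON-RPC: the rpc_cmd calls traced above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, the two nvmf_subsystem_add_listener calls, nvmf_get_subsystems) are issued against the /var/tmp/spdk.sock socket the target is listening on. A rough stand-alone reproduction, assuming rpc_cmd forwards to scripts/rpc.py as the autotest wrapper normally does and using the paths and arguments from the trace:

  # start the target inside the namespace, same flags as the trace
  ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &

  # configure it over the default /var/tmp/spdk.sock RPC socket
  cd /home/vagrant/spdk_repo/spdk
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_get_subsystems

  # then query the discovery subsystem, exactly as the test does
  build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all

The 64 and 512 passed to bdev_malloc_create are the MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE values set at the top of identify.sh in the trace.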
00:26:25.851 [2024-04-26 15:44:56.092962] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80402 ] 00:26:26.114 [2024-04-26 15:44:56.227306] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:26:26.114 [2024-04-26 15:44:56.227377] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:26.114 [2024-04-26 15:44:56.227385] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:26.114 [2024-04-26 15:44:56.227399] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:26.114 [2024-04-26 15:44:56.227413] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:26.114 [2024-04-26 15:44:56.227565] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:26:26.114 [2024-04-26 15:44:56.227617] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xd56300 0 00:26:26.114 [2024-04-26 15:44:56.233164] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:26.114 [2024-04-26 15:44:56.233190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:26.114 [2024-04-26 15:44:56.233197] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:26.114 [2024-04-26 15:44:56.233201] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:26.114 [2024-04-26 15:44:56.233250] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.233258] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.233262] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.114 [2024-04-26 15:44:56.233277] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:26.114 [2024-04-26 15:44:56.233309] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.114 [2024-04-26 15:44:56.241155] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.114 [2024-04-26 15:44:56.241178] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.114 [2024-04-26 15:44:56.241184] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241189] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.114 [2024-04-26 15:44:56.241201] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:26.114 [2024-04-26 15:44:56.241209] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:26:26.114 [2024-04-26 15:44:56.241216] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:26:26.114 [2024-04-26 15:44:56.241235] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241241] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241246] 
nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.114 [2024-04-26 15:44:56.241256] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.114 [2024-04-26 15:44:56.241286] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.114 [2024-04-26 15:44:56.241364] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.114 [2024-04-26 15:44:56.241371] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.114 [2024-04-26 15:44:56.241375] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241379] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.114 [2024-04-26 15:44:56.241390] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:26:26.114 [2024-04-26 15:44:56.241399] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:26:26.114 [2024-04-26 15:44:56.241407] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241412] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241416] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.114 [2024-04-26 15:44:56.241424] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.114 [2024-04-26 15:44:56.241444] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.114 [2024-04-26 15:44:56.241504] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.114 [2024-04-26 15:44:56.241511] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.114 [2024-04-26 15:44:56.241515] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241519] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.114 [2024-04-26 15:44:56.241525] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:26:26.114 [2024-04-26 15:44:56.241534] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:26:26.114 [2024-04-26 15:44:56.241550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241554] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.114 [2024-04-26 15:44:56.241566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.114 [2024-04-26 15:44:56.241584] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.114 [2024-04-26 15:44:56.241643] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.114 [2024-04-26 15:44:56.241649] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:26:26.114 [2024-04-26 15:44:56.241653] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241657] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.114 [2024-04-26 15:44:56.241663] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:26.114 [2024-04-26 15:44:56.241674] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241679] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.114 [2024-04-26 15:44:56.241690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.114 [2024-04-26 15:44:56.241709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.114 [2024-04-26 15:44:56.241770] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.114 [2024-04-26 15:44:56.241777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.114 [2024-04-26 15:44:56.241781] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.114 [2024-04-26 15:44:56.241785] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.115 [2024-04-26 15:44:56.241790] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:26:26.115 [2024-04-26 15:44:56.241796] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:26:26.115 [2024-04-26 15:44:56.241805] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:26.115 [2024-04-26 15:44:56.241911] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:26:26.115 [2024-04-26 15:44:56.241917] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:26.115 [2024-04-26 15:44:56.241926] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.241931] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.241935] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.241943] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.115 [2024-04-26 15:44:56.241962] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.115 [2024-04-26 15:44:56.242019] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.115 [2024-04-26 15:44:56.242026] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.115 [2024-04-26 15:44:56.242030] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242034] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.115 [2024-04-26 15:44:56.242040] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:26.115 [2024-04-26 15:44:56.242059] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242063] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242067] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.115 [2024-04-26 15:44:56.242093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.115 [2024-04-26 15:44:56.242167] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.115 [2024-04-26 15:44:56.242176] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.115 [2024-04-26 15:44:56.242180] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242184] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.115 [2024-04-26 15:44:56.242190] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:26.115 [2024-04-26 15:44:56.242195] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:26:26.115 [2024-04-26 15:44:56.242204] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:26:26.115 [2024-04-26 15:44:56.242214] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:26:26.115 [2024-04-26 15:44:56.242226] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242230] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.115 [2024-04-26 15:44:56.242261] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.115 [2024-04-26 15:44:56.242365] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.115 [2024-04-26 15:44:56.242372] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.115 [2024-04-26 15:44:56.242376] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242381] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd56300): datao=0, datal=4096, cccid=0 00:26:26.115 [2024-04-26 15:44:56.242386] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd9e9c0) on tqpair(0xd56300): expected_datao=0, payload_size=4096 00:26:26.115 [2024-04-26 15:44:56.242391] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242400] 
nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242405] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242414] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.115 [2024-04-26 15:44:56.242420] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.115 [2024-04-26 15:44:56.242424] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242428] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.115 [2024-04-26 15:44:56.242437] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:26:26.115 [2024-04-26 15:44:56.242442] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:26:26.115 [2024-04-26 15:44:56.242447] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:26:26.115 [2024-04-26 15:44:56.242458] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:26:26.115 [2024-04-26 15:44:56.242463] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:26:26.115 [2024-04-26 15:44:56.242469] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:26:26.115 [2024-04-26 15:44:56.242478] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:26:26.115 [2024-04-26 15:44:56.242486] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242491] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242495] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242503] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:26.115 [2024-04-26 15:44:56.242524] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.115 [2024-04-26 15:44:56.242591] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.115 [2024-04-26 15:44:56.242598] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.115 [2024-04-26 15:44:56.242602] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242606] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9e9c0) on tqpair=0xd56300 00:26:26.115 [2024-04-26 15:44:56.242614] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242619] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242623] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242629] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.115 [2024-04-26 15:44:56.242636] nvme_tcp.c: 766:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242640] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242644] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.115 [2024-04-26 15:44:56.242658] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242662] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242666] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.115 [2024-04-26 15:44:56.242678] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242682] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242686] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.115 [2024-04-26 15:44:56.242698] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:26:26.115 [2024-04-26 15:44:56.242711] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:26.115 [2024-04-26 15:44:56.242719] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242723] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242731] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.115 [2024-04-26 15:44:56.242751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9e9c0, cid 0, qid 0 00:26:26.115 [2024-04-26 15:44:56.242759] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9eb20, cid 1, qid 0 00:26:26.115 [2024-04-26 15:44:56.242764] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ec80, cid 2, qid 0 00:26:26.115 [2024-04-26 15:44:56.242768] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.115 [2024-04-26 15:44:56.242773] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ef40, cid 4, qid 0 00:26:26.115 [2024-04-26 15:44:56.242871] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.115 [2024-04-26 15:44:56.242878] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.115 [2024-04-26 15:44:56.242882] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242886] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ef40) on tqpair=0xd56300 00:26:26.115 [2024-04-26 15:44:56.242892] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:26:26.115 [2024-04-26 15:44:56.242898] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:26:26.115 [2024-04-26 15:44:56.242909] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.115 [2024-04-26 15:44:56.242914] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd56300) 00:26:26.115 [2024-04-26 15:44:56.242922] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.116 [2024-04-26 15:44:56.242941] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ef40, cid 4, qid 0 00:26:26.116 [2024-04-26 15:44:56.243011] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.116 [2024-04-26 15:44:56.243018] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.116 [2024-04-26 15:44:56.243022] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243026] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd56300): datao=0, datal=4096, cccid=4 00:26:26.116 [2024-04-26 15:44:56.243031] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd9ef40) on tqpair(0xd56300): expected_datao=0, payload_size=4096 00:26:26.116 [2024-04-26 15:44:56.243035] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243043] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243047] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243055] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.116 [2024-04-26 15:44:56.243061] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.116 [2024-04-26 15:44:56.243065] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243069] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ef40) on tqpair=0xd56300 00:26:26.116 [2024-04-26 15:44:56.243083] nvme_ctrlr.c:4036:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:26:26.116 [2024-04-26 15:44:56.243115] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243122] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd56300) 00:26:26.116 [2024-04-26 15:44:56.243130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.116 [2024-04-26 15:44:56.243149] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243155] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243159] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xd56300) 00:26:26.116 [2024-04-26 15:44:56.243166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.116 [2024-04-26 15:44:56.243195] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ef40, cid 4, qid 0 00:26:26.116 [2024-04-26 15:44:56.243203] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9f0a0, cid 5, qid 0 00:26:26.116 [2024-04-26 15:44:56.243310] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.116 [2024-04-26 15:44:56.243317] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.116 [2024-04-26 15:44:56.243322] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243325] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd56300): datao=0, datal=1024, cccid=4 00:26:26.116 [2024-04-26 15:44:56.243330] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd9ef40) on tqpair(0xd56300): expected_datao=0, payload_size=1024 00:26:26.116 [2024-04-26 15:44:56.243335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243342] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243346] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243352] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.116 [2024-04-26 15:44:56.243358] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.116 [2024-04-26 15:44:56.243362] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.243366] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9f0a0) on tqpair=0xd56300 00:26:26.116 [2024-04-26 15:44:56.284228] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.116 [2024-04-26 15:44:56.284252] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.116 [2024-04-26 15:44:56.284258] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284263] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ef40) on tqpair=0xd56300 00:26:26.116 [2024-04-26 15:44:56.284284] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284291] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd56300) 00:26:26.116 [2024-04-26 15:44:56.284300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.116 [2024-04-26 15:44:56.284329] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ef40, cid 4, qid 0 00:26:26.116 [2024-04-26 15:44:56.284437] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.116 [2024-04-26 15:44:56.284446] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.116 [2024-04-26 15:44:56.284450] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284465] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd56300): datao=0, datal=3072, cccid=4 00:26:26.116 [2024-04-26 15:44:56.284470] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd9ef40) on tqpair(0xd56300): expected_datao=0, payload_size=3072 00:26:26.116 [2024-04-26 15:44:56.284475] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284483] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284487] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 
15:44:56.284495] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.116 [2024-04-26 15:44:56.284502] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.116 [2024-04-26 15:44:56.284505] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284510] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ef40) on tqpair=0xd56300 00:26:26.116 [2024-04-26 15:44:56.284520] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284525] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xd56300) 00:26:26.116 [2024-04-26 15:44:56.284533] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.116 [2024-04-26 15:44:56.284559] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ef40, cid 4, qid 0 00:26:26.116 [2024-04-26 15:44:56.284638] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.116 [2024-04-26 15:44:56.284645] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.116 [2024-04-26 15:44:56.284649] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284653] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xd56300): datao=0, datal=8, cccid=4 00:26:26.116 [2024-04-26 15:44:56.284658] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xd9ef40) on tqpair(0xd56300): expected_datao=0, payload_size=8 00:26:26.116 [2024-04-26 15:44:56.284663] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284670] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.116 [2024-04-26 15:44:56.284674] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.116 ===================================================== 00:26:26.116 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:26:26.116 ===================================================== 00:26:26.116 Controller Capabilities/Features 00:26:26.116 ================================ 00:26:26.116 Vendor ID: 0000 00:26:26.116 Subsystem Vendor ID: 0000 00:26:26.116 Serial Number: .................... 00:26:26.116 Model Number: ........................................ 
00:26:26.116 Firmware Version: 24.05 00:26:26.116 Recommended Arb Burst: 0 00:26:26.116 IEEE OUI Identifier: 00 00 00 00:26:26.116 Multi-path I/O 00:26:26.116 May have multiple subsystem ports: No 00:26:26.116 May have multiple controllers: No 00:26:26.116 Associated with SR-IOV VF: No 00:26:26.116 Max Data Transfer Size: 131072 00:26:26.116 Max Number of Namespaces: 0 00:26:26.116 Max Number of I/O Queues: 1024 00:26:26.116 NVMe Specification Version (VS): 1.3 00:26:26.116 NVMe Specification Version (Identify): 1.3 00:26:26.116 Maximum Queue Entries: 128 00:26:26.116 Contiguous Queues Required: Yes 00:26:26.116 Arbitration Mechanisms Supported 00:26:26.116 Weighted Round Robin: Not Supported 00:26:26.116 Vendor Specific: Not Supported 00:26:26.116 Reset Timeout: 15000 ms 00:26:26.116 Doorbell Stride: 4 bytes 00:26:26.116 NVM Subsystem Reset: Not Supported 00:26:26.116 Command Sets Supported 00:26:26.116 NVM Command Set: Supported 00:26:26.116 Boot Partition: Not Supported 00:26:26.116 Memory Page Size Minimum: 4096 bytes 00:26:26.116 Memory Page Size Maximum: 4096 bytes 00:26:26.116 Persistent Memory Region: Not Supported 00:26:26.116 Optional Asynchronous Events Supported 00:26:26.116 Namespace Attribute Notices: Not Supported 00:26:26.116 Firmware Activation Notices: Not Supported 00:26:26.116 ANA Change Notices: Not Supported 00:26:26.116 PLE Aggregate Log Change Notices: Not Supported 00:26:26.116 LBA Status Info Alert Notices: Not Supported 00:26:26.116 EGE Aggregate Log Change Notices: Not Supported 00:26:26.116 Normal NVM Subsystem Shutdown event: Not Supported 00:26:26.116 Zone Descriptor Change Notices: Not Supported 00:26:26.116 Discovery Log Change Notices: Supported 00:26:26.116 Controller Attributes 00:26:26.116 128-bit Host Identifier: Not Supported 00:26:26.116 Non-Operational Permissive Mode: Not Supported 00:26:26.116 NVM Sets: Not Supported 00:26:26.116 Read Recovery Levels: Not Supported 00:26:26.116 Endurance Groups: Not Supported 00:26:26.116 Predictable Latency Mode: Not Supported 00:26:26.116 Traffic Based Keep ALive: Not Supported 00:26:26.116 Namespace Granularity: Not Supported 00:26:26.116 SQ Associations: Not Supported 00:26:26.116 UUID List: Not Supported 00:26:26.116 Multi-Domain Subsystem: Not Supported 00:26:26.116 Fixed Capacity Management: Not Supported 00:26:26.116 Variable Capacity Management: Not Supported 00:26:26.116 Delete Endurance Group: Not Supported 00:26:26.116 Delete NVM Set: Not Supported 00:26:26.116 Extended LBA Formats Supported: Not Supported 00:26:26.116 Flexible Data Placement Supported: Not Supported 00:26:26.116 00:26:26.116 Controller Memory Buffer Support 00:26:26.116 ================================ 00:26:26.116 Supported: No 00:26:26.117 00:26:26.117 Persistent Memory Region Support 00:26:26.117 ================================ 00:26:26.117 Supported: No 00:26:26.117 00:26:26.117 Admin Command Set Attributes 00:26:26.117 ============================ 00:26:26.117 Security Send/Receive: Not Supported 00:26:26.117 Format NVM: Not Supported 00:26:26.117 Firmware Activate/Download: Not Supported 00:26:26.117 Namespace Management: Not Supported 00:26:26.117 Device Self-Test: Not Supported 00:26:26.117 Directives: Not Supported 00:26:26.117 NVMe-MI: Not Supported 00:26:26.117 Virtualization Management: Not Supported 00:26:26.117 Doorbell Buffer Config: Not Supported 00:26:26.117 Get LBA Status Capability: Not Supported 00:26:26.117 Command & Feature Lockdown Capability: Not Supported 00:26:26.117 Abort Command Limit: 1 00:26:26.117 Async 
Event Request Limit: 4 00:26:26.117 Number of Firmware Slots: N/A 00:26:26.117 Firmware Slot 1 Read-Only: N/A 00:26:26.117 Firm[2024-04-26 15:44:56.327176] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.117 [2024-04-26 15:44:56.327202] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.117 [2024-04-26 15:44:56.327208] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327213] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ef40) on tqpair=0xd56300 00:26:26.117 ware Activation Without Reset: N/A 00:26:26.117 Multiple Update Detection Support: N/A 00:26:26.117 Firmware Update Granularity: No Information Provided 00:26:26.117 Per-Namespace SMART Log: No 00:26:26.117 Asymmetric Namespace Access Log Page: Not Supported 00:26:26.117 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:26:26.117 Command Effects Log Page: Not Supported 00:26:26.117 Get Log Page Extended Data: Supported 00:26:26.117 Telemetry Log Pages: Not Supported 00:26:26.117 Persistent Event Log Pages: Not Supported 00:26:26.117 Supported Log Pages Log Page: May Support 00:26:26.117 Commands Supported & Effects Log Page: Not Supported 00:26:26.117 Feature Identifiers & Effects Log Page:May Support 00:26:26.117 NVMe-MI Commands & Effects Log Page: May Support 00:26:26.117 Data Area 4 for Telemetry Log: Not Supported 00:26:26.117 Error Log Page Entries Supported: 128 00:26:26.117 Keep Alive: Not Supported 00:26:26.117 00:26:26.117 NVM Command Set Attributes 00:26:26.117 ========================== 00:26:26.117 Submission Queue Entry Size 00:26:26.117 Max: 1 00:26:26.117 Min: 1 00:26:26.117 Completion Queue Entry Size 00:26:26.117 Max: 1 00:26:26.117 Min: 1 00:26:26.117 Number of Namespaces: 0 00:26:26.117 Compare Command: Not Supported 00:26:26.117 Write Uncorrectable Command: Not Supported 00:26:26.117 Dataset Management Command: Not Supported 00:26:26.117 Write Zeroes Command: Not Supported 00:26:26.117 Set Features Save Field: Not Supported 00:26:26.117 Reservations: Not Supported 00:26:26.117 Timestamp: Not Supported 00:26:26.117 Copy: Not Supported 00:26:26.117 Volatile Write Cache: Not Present 00:26:26.117 Atomic Write Unit (Normal): 1 00:26:26.117 Atomic Write Unit (PFail): 1 00:26:26.117 Atomic Compare & Write Unit: 1 00:26:26.117 Fused Compare & Write: Supported 00:26:26.117 Scatter-Gather List 00:26:26.117 SGL Command Set: Supported 00:26:26.117 SGL Keyed: Supported 00:26:26.117 SGL Bit Bucket Descriptor: Not Supported 00:26:26.117 SGL Metadata Pointer: Not Supported 00:26:26.117 Oversized SGL: Not Supported 00:26:26.117 SGL Metadata Address: Not Supported 00:26:26.117 SGL Offset: Supported 00:26:26.117 Transport SGL Data Block: Not Supported 00:26:26.117 Replay Protected Memory Block: Not Supported 00:26:26.117 00:26:26.117 Firmware Slot Information 00:26:26.117 ========================= 00:26:26.117 Active slot: 0 00:26:26.117 00:26:26.117 00:26:26.117 Error Log 00:26:26.117 ========= 00:26:26.117 00:26:26.117 Active Namespaces 00:26:26.117 ================= 00:26:26.117 Discovery Log Page 00:26:26.117 ================== 00:26:26.117 Generation Counter: 2 00:26:26.117 Number of Records: 2 00:26:26.117 Record Format: 0 00:26:26.117 00:26:26.117 Discovery Log Entry 0 00:26:26.117 ---------------------- 00:26:26.117 Transport Type: 3 (TCP) 00:26:26.117 Address Family: 1 (IPv4) 00:26:26.117 Subsystem Type: 3 (Current Discovery Subsystem) 00:26:26.117 Entry Flags: 00:26:26.117 Duplicate Returned 
Information: 1 00:26:26.117 Explicit Persistent Connection Support for Discovery: 1 00:26:26.117 Transport Requirements: 00:26:26.117 Secure Channel: Not Required 00:26:26.117 Port ID: 0 (0x0000) 00:26:26.117 Controller ID: 65535 (0xffff) 00:26:26.117 Admin Max SQ Size: 128 00:26:26.117 Transport Service Identifier: 4420 00:26:26.117 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:26:26.117 Transport Address: 10.0.0.2 00:26:26.117 Discovery Log Entry 1 00:26:26.117 ---------------------- 00:26:26.117 Transport Type: 3 (TCP) 00:26:26.117 Address Family: 1 (IPv4) 00:26:26.117 Subsystem Type: 2 (NVM Subsystem) 00:26:26.117 Entry Flags: 00:26:26.117 Duplicate Returned Information: 0 00:26:26.117 Explicit Persistent Connection Support for Discovery: 0 00:26:26.117 Transport Requirements: 00:26:26.117 Secure Channel: Not Required 00:26:26.117 Port ID: 0 (0x0000) 00:26:26.117 Controller ID: 65535 (0xffff) 00:26:26.117 Admin Max SQ Size: 128 00:26:26.117 Transport Service Identifier: 4420 00:26:26.117 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:26:26.117 Transport Address: 10.0.0.2 [2024-04-26 15:44:56.327328] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:26:26.117 [2024-04-26 15:44:56.327347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.117 [2024-04-26 15:44:56.327355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.117 [2024-04-26 15:44:56.327362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.117 [2024-04-26 15:44:56.327368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.117 [2024-04-26 15:44:56.327380] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327384] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327388] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.117 [2024-04-26 15:44:56.327398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.117 [2024-04-26 15:44:56.327426] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.117 [2024-04-26 15:44:56.327486] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.117 [2024-04-26 15:44:56.327493] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.117 [2024-04-26 15:44:56.327497] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327501] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.117 [2024-04-26 15:44:56.327515] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327520] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327524] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.117 [2024-04-26 15:44:56.327532] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 
cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.117 [2024-04-26 15:44:56.327558] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.117 [2024-04-26 15:44:56.327637] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.117 [2024-04-26 15:44:56.327644] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.117 [2024-04-26 15:44:56.327648] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327652] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.117 [2024-04-26 15:44:56.327658] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:26:26.117 [2024-04-26 15:44:56.327663] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:26:26.117 [2024-04-26 15:44:56.327673] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327678] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.117 [2024-04-26 15:44:56.327682] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.117 [2024-04-26 15:44:56.327690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.117 [2024-04-26 15:44:56.327709] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.117 [2024-04-26 15:44:56.327770] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.327777] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.327781] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.327785] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.327796] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.327801] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.327805] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.327812] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.327830] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.327890] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.327903] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.327907] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.327912] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.327923] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.327928] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.327932] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 
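The discovery log page printed above contains two records, both reachable at 10.0.0.2:4420: the discovery subsystem itself and the nqn.2016-06.io.spdk:cnode1 NVM subsystem. This test only reads them with spdk_nvme_identify, but since nvme-tcp was modprobed earlier, the same records could in principle be fetched and the subsystem attached from the initiator namespace with nvme-cli; this is purely an illustrative aside, not something the test performs:

  # not run by this test: kernel-initiator view of the same target via nvme-cli
  nvme discover -t tcp -a 10.0.0.2 -s 4420
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme list
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1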
00:26:26.118 [2024-04-26 15:44:56.327940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.327959] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328019] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328026] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328030] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328045] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328049] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328053] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328078] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328159] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328168] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328172] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328177] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328189] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328194] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328198] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328227] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328291] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328298] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328302] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328306] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328316] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328321] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328325] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 
15:44:56.328362] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328439] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328450] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328454] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328458] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328470] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328475] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328479] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328506] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328564] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328571] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328575] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328579] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328595] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328625] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328683] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328690] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328694] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328698] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328708] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328713] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328717] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328724] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328742] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328799] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:26:26.118 [2024-04-26 15:44:56.328806] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328810] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328814] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328824] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328829] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328833] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328840] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328858] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.328920] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.328927] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.328931] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328935] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.328946] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328950] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.328954] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.328962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.328980] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.329041] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.329050] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.329054] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.329058] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.118 [2024-04-26 15:44:56.329069] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.329074] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.329078] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.118 [2024-04-26 15:44:56.329085] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.118 [2024-04-26 15:44:56.329103] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.118 [2024-04-26 15:44:56.329177] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.118 [2024-04-26 15:44:56.329196] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.118 [2024-04-26 15:44:56.329201] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:26:26.118 [2024-04-26 15:44:56.329206] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329217] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329222] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329226] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329256] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.329311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.329318] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.329322] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329337] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329341] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329346] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329371] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.329430] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.329438] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.329442] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329446] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329457] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329462] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329466] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329492] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.329548] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.329555] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.329559] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329563] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329574] 
nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329578] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329582] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329608] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.329664] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.329671] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.329674] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329678] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329689] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329697] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329705] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329723] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.329781] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.329788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.329792] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329796] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329807] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329811] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329815] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329823] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329841] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.329901] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.329908] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.329912] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329917] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.329927] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.329932] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 
15:44:56.329936] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.329944] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.329962] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.330018] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.330026] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.330030] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330034] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.330045] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330050] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330054] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.330061] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.330079] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.330146] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.330155] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.330158] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330163] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.330174] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330179] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330183] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.330191] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.330211] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.330272] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.330279] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.330283] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330287] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.119 [2024-04-26 15:44:56.330298] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330303] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.119 [2024-04-26 15:44:56.330307] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.119 [2024-04-26 15:44:56.330314] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.119 [2024-04-26 15:44:56.330332] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.119 [2024-04-26 15:44:56.330388] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.119 [2024-04-26 15:44:56.330395] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.119 [2024-04-26 15:44:56.330399] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330403] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.330414] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330419] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330423] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.330430] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.330448] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.330508] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.330516] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.330519] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330523] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.330534] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330538] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330542] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.330550] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.330568] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.330626] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.330633] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.330637] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330641] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.330652] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330656] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330660] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.330668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.330685] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, 
qid 0 00:26:26.120 [2024-04-26 15:44:56.330744] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.330751] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.330755] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330759] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.330769] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330774] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330778] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.330785] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.330803] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.330859] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.330866] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.330870] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330874] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.330884] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330889] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330893] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.330901] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.330919] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.330974] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.330982] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.330986] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.330990] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331001] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331005] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331009] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331017] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.331036] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.331091] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.331098] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: 
enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.331102] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331106] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331116] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331121] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331124] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331132] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.331175] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.331233] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.331241] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.331244] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331248] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331259] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331264] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331268] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.331295] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.331352] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.331359] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.331363] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331367] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331377] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331382] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331386] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.331411] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.331470] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.331477] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.331481] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331485] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331495] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331500] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331504] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.331529] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.331595] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.331602] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.331606] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331610] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331620] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331625] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331629] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.120 [2024-04-26 15:44:56.331655] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.120 [2024-04-26 15:44:56.331711] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.120 [2024-04-26 15:44:56.331718] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.120 [2024-04-26 15:44:56.331722] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331726] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.120 [2024-04-26 15:44:56.331736] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331741] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.120 [2024-04-26 15:44:56.331745] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.120 [2024-04-26 15:44:56.331752] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.331770] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.331830] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.331837] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.331840] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.331844] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.331855] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 
[2024-04-26 15:44:56.331860] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.331864] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.331871] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.331889] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.331947] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.331963] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.331968] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.331972] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.331984] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.331989] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.331993] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332001] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332021] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332078] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332094] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332098] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332103] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332114] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332119] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332123] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332131] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332165] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332221] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332232] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332237] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332241] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332252] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332257] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332261] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332288] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332357] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332375] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332380] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332384] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332396] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332401] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332405] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332413] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332433] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332493] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332500] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332504] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332508] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332519] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332523] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332527] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332553] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332608] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332620] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332624] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332628] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332639] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332644] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332648] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332656] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.121 [2024-04-26 15:44:56.332675] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332734] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332741] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332745] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332749] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332759] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332764] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332768] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332775] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332802] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332860] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332867] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332871] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332875] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.332886] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332890] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332894] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.332902] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.332919] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.332977] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.332984] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.332988] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.332992] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.333003] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.333007] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.333011] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.333019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.333036] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.333098] 
nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.333105] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.333109] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.333113] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.333123] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.333128] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.333132] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xd56300) 00:26:26.121 [2024-04-26 15:44:56.337160] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.121 [2024-04-26 15:44:56.337194] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xd9ede0, cid 3, qid 0 00:26:26.121 [2024-04-26 15:44:56.337257] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.121 [2024-04-26 15:44:56.337265] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.121 [2024-04-26 15:44:56.337269] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.121 [2024-04-26 15:44:56.337273] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0xd9ede0) on tqpair=0xd56300 00:26:26.121 [2024-04-26 15:44:56.337283] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 9 milliseconds 00:26:26.122 00:26:26.122 15:44:56 -- host/identify.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:26:26.122 [2024-04-26 15:44:56.372513] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:26:26.122 [2024-04-26 15:44:56.372556] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80404 ] 00:26:26.382 [2024-04-26 15:44:56.507274] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:26:26.382 [2024-04-26 15:44:56.507345] nvme_tcp.c:2326:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:26:26.382 [2024-04-26 15:44:56.507353] nvme_tcp.c:2330:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:26:26.382 [2024-04-26 15:44:56.507365] nvme_tcp.c:2348:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:26:26.382 [2024-04-26 15:44:56.507378] sock.c: 336:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:26:26.382 [2024-04-26 15:44:56.507520] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:26:26.382 [2024-04-26 15:44:56.507569] nvme_tcp.c:1543:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x13bc300 0 00:26:26.382 [2024-04-26 15:44:56.520156] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:26:26.382 [2024-04-26 15:44:56.520184] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:26:26.382 [2024-04-26 15:44:56.520190] nvme_tcp.c:1589:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:26:26.382 [2024-04-26 15:44:56.520194] nvme_tcp.c:1590:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:26:26.382 [2024-04-26 15:44:56.520245] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.520253] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.520258] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.382 [2024-04-26 15:44:56.520273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:26:26.382 [2024-04-26 15:44:56.520305] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.382 [2024-04-26 15:44:56.528163] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.382 [2024-04-26 15:44:56.528190] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.382 [2024-04-26 15:44:56.528196] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.528204] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.382 [2024-04-26 15:44:56.528227] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:26:26.382 [2024-04-26 15:44:56.528237] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:26:26.382 [2024-04-26 15:44:56.528244] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:26:26.382 [2024-04-26 15:44:56.528264] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.528270] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.528274] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.382 [2024-04-26 15:44:56.528284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.382 [2024-04-26 15:44:56.528317] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.382 [2024-04-26 15:44:56.528397] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.382 [2024-04-26 15:44:56.528406] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.382 [2024-04-26 15:44:56.528410] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.528415] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.382 [2024-04-26 15:44:56.528426] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:26:26.382 [2024-04-26 15:44:56.528435] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:26:26.382 [2024-04-26 15:44:56.528443] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.528448] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.382 [2024-04-26 15:44:56.528452] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.382 [2024-04-26 15:44:56.528460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.382 [2024-04-26 15:44:56.528483] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.382 [2024-04-26 15:44:56.528544] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.382 [2024-04-26 15:44:56.528552] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.382 [2024-04-26 15:44:56.528555] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528560] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 [2024-04-26 15:44:56.528567] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:26:26.383 [2024-04-26 15:44:56.528586] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:26:26.383 [2024-04-26 15:44:56.528594] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528598] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528602] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.528610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.383 [2024-04-26 15:44:56.528629] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.383 [2024-04-26 15:44:56.528685] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.383 [2024-04-26 15:44:56.528691] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.383 [2024-04-26 
15:44:56.528695] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528700] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 [2024-04-26 15:44:56.528707] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:26:26.383 [2024-04-26 15:44:56.528717] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528722] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.528733] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.383 [2024-04-26 15:44:56.528751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.383 [2024-04-26 15:44:56.528811] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.383 [2024-04-26 15:44:56.528819] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.383 [2024-04-26 15:44:56.528822] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528826] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 [2024-04-26 15:44:56.528833] nvme_ctrlr.c:3749:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:26:26.383 [2024-04-26 15:44:56.528838] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:26:26.383 [2024-04-26 15:44:56.528847] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:26:26.383 [2024-04-26 15:44:56.528953] nvme_ctrlr.c:3942:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:26:26.383 [2024-04-26 15:44:56.528967] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:26:26.383 [2024-04-26 15:44:56.528978] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528983] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.528987] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.528995] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.383 [2024-04-26 15:44:56.529015] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.383 [2024-04-26 15:44:56.529074] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.383 [2024-04-26 15:44:56.529086] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.383 [2024-04-26 15:44:56.529090] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529094] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 
[2024-04-26 15:44:56.529101] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:26:26.383 [2024-04-26 15:44:56.529112] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529117] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529122] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.529129] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.383 [2024-04-26 15:44:56.529162] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.383 [2024-04-26 15:44:56.529225] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.383 [2024-04-26 15:44:56.529237] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.383 [2024-04-26 15:44:56.529241] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529246] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 [2024-04-26 15:44:56.529252] nvme_ctrlr.c:3784:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:26:26.383 [2024-04-26 15:44:56.529258] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:26:26.383 [2024-04-26 15:44:56.529267] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:26:26.383 [2024-04-26 15:44:56.529279] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:26:26.383 [2024-04-26 15:44:56.529289] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529294] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.529303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.383 [2024-04-26 15:44:56.529326] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.383 [2024-04-26 15:44:56.529429] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.383 [2024-04-26 15:44:56.529441] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.383 [2024-04-26 15:44:56.529446] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529450] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=4096, cccid=0 00:26:26.383 [2024-04-26 15:44:56.529455] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14049c0) on tqpair(0x13bc300): expected_datao=0, payload_size=4096 00:26:26.383 [2024-04-26 15:44:56.529461] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529470] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529475] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: 
*DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529484] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.383 [2024-04-26 15:44:56.529491] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.383 [2024-04-26 15:44:56.529494] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529498] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 [2024-04-26 15:44:56.529509] nvme_ctrlr.c:1984:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:26:26.383 [2024-04-26 15:44:56.529514] nvme_ctrlr.c:1988:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:26:26.383 [2024-04-26 15:44:56.529519] nvme_ctrlr.c:1991:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:26:26.383 [2024-04-26 15:44:56.529528] nvme_ctrlr.c:2015:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:26:26.383 [2024-04-26 15:44:56.529534] nvme_ctrlr.c:2030:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:26:26.383 [2024-04-26 15:44:56.529540] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:26:26.383 [2024-04-26 15:44:56.529550] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:26:26.383 [2024-04-26 15:44:56.529558] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529562] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529566] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.529575] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:26.383 [2024-04-26 15:44:56.529596] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.383 [2024-04-26 15:44:56.529672] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.383 [2024-04-26 15:44:56.529679] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.383 [2024-04-26 15:44:56.529683] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529687] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14049c0) on tqpair=0x13bc300 00:26:26.383 [2024-04-26 15:44:56.529696] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529701] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529704] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.529711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.383 [2024-04-26 15:44:56.529718] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529722] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529726] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=1 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.529733] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.383 [2024-04-26 15:44:56.529739] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529743] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529747] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x13bc300) 00:26:26.383 [2024-04-26 15:44:56.529753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.383 [2024-04-26 15:44:56.529760] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.383 [2024-04-26 15:44:56.529771] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.529774] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.529780] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.384 [2024-04-26 15:44:56.529786] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.529799] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.529808] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.529812] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.529819] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.384 [2024-04-26 15:44:56.529840] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14049c0, cid 0, qid 0 00:26:26.384 [2024-04-26 15:44:56.529848] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404b20, cid 1, qid 0 00:26:26.384 [2024-04-26 15:44:56.529853] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404c80, cid 2, qid 0 00:26:26.384 [2024-04-26 15:44:56.529858] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.384 [2024-04-26 15:44:56.529863] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.384 [2024-04-26 15:44:56.529959] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.384 [2024-04-26 15:44:56.529966] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.384 [2024-04-26 15:44:56.529970] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.529974] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.384 [2024-04-26 15:44:56.529981] nvme_ctrlr.c:2902:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:26:26.384 [2024-04-26 15:44:56.529987] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific 
(timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.529996] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.530003] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.530010] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.530015] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.530019] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.530026] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:26:26.384 [2024-04-26 15:44:56.530045] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.384 [2024-04-26 15:44:56.530105] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.384 [2024-04-26 15:44:56.530112] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.384 [2024-04-26 15:44:56.530116] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.530120] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.384 [2024-04-26 15:44:56.534187] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534225] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534241] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534246] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.534255] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.384 [2024-04-26 15:44:56.534284] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.384 [2024-04-26 15:44:56.534366] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.384 [2024-04-26 15:44:56.534374] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.384 [2024-04-26 15:44:56.534378] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534382] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=4096, cccid=4 00:26:26.384 [2024-04-26 15:44:56.534387] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1404f40) on tqpair(0x13bc300): expected_datao=0, payload_size=4096 00:26:26.384 [2024-04-26 15:44:56.534392] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534400] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534404] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534413] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.384 
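At this point the trace moves from controller-level setup (SET FEATURES NUMBER OF QUEUES) into namespace discovery: identify active ns, identify ns, and identify namespace id descriptors in the records that follow. Those steps are what populate the namespace list the public API exposes; below is a small sketch of reading that state back from an already-connected controller, assuming the standard accessors in spdk/nvme.h. The helper name is hypothetical and not part of this test.

#include <stdio.h>
#include <inttypes.h>
#include "spdk/nvme.h"

/* Illustrative helper: print identify-controller data and walk the active
 * namespaces discovered during the init sequence traced in this log. */
static void
print_ctrlr_summary(struct spdk_nvme_ctrlr *ctrlr)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);
	uint32_t nsid;

	/* mn/sn are fixed-width, space-padded Identify Controller fields. */
	printf("model: %.*s  serial: %.*s\n",
	       (int)sizeof(cdata->mn), (const char *)cdata->mn,
	       (int)sizeof(cdata->sn), (const char *)cdata->sn);

	for (nsid = spdk_nvme_ctrlr_get_first_active_ns(ctrlr);
	     nsid != 0;
	     nsid = spdk_nvme_ctrlr_get_next_active_ns(ctrlr, nsid)) {
		struct spdk_nvme_ns *ns = spdk_nvme_ctrlr_get_ns(ctrlr, nsid);

		printf("ns %" PRIu32 ": %" PRIu64 " bytes, sector size %" PRIu32 "\n",
		       nsid, spdk_nvme_ns_get_size(ns),
		       spdk_nvme_ns_get_sector_size(ns));
	}
}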
[2024-04-26 15:44:56.534419] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.384 [2024-04-26 15:44:56.534423] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534427] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.384 [2024-04-26 15:44:56.534440] nvme_ctrlr.c:4557:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:26:26.384 [2024-04-26 15:44:56.534455] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534466] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534474] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534479] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.534487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.384 [2024-04-26 15:44:56.534509] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.384 [2024-04-26 15:44:56.534596] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.384 [2024-04-26 15:44:56.534603] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.384 [2024-04-26 15:44:56.534607] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534611] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=4096, cccid=4 00:26:26.384 [2024-04-26 15:44:56.534616] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1404f40) on tqpair(0x13bc300): expected_datao=0, payload_size=4096 00:26:26.384 [2024-04-26 15:44:56.534621] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534628] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534633] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534641] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.384 [2024-04-26 15:44:56.534648] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.384 [2024-04-26 15:44:56.534651] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534655] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.384 [2024-04-26 15:44:56.534673] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534685] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534694] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534698] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.534706] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.384 [2024-04-26 15:44:56.534727] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.384 [2024-04-26 15:44:56.534803] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.384 [2024-04-26 15:44:56.534810] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.384 [2024-04-26 15:44:56.534813] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534817] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=4096, cccid=4 00:26:26.384 [2024-04-26 15:44:56.534822] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1404f40) on tqpair(0x13bc300): expected_datao=0, payload_size=4096 00:26:26.384 [2024-04-26 15:44:56.534827] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534834] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534839] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534847] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.384 [2024-04-26 15:44:56.534853] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.384 [2024-04-26 15:44:56.534857] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534862] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.384 [2024-04-26 15:44:56.534872] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534881] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534894] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534902] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534907] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534913] nvme_ctrlr.c:2990:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:26:26.384 [2024-04-26 15:44:56.534918] nvme_ctrlr.c:1484:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:26:26.384 [2024-04-26 15:44:56.534924] nvme_ctrlr.c:1490:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:26:26.384 [2024-04-26 15:44:56.534940] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.384 [2024-04-26 15:44:56.534945] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.384 [2024-04-26 15:44:56.534953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:26.384 [2024-04-26 15:44:56.534961] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.534965] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.534969] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.534975] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:26:26.385 [2024-04-26 15:44:56.535001] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.385 [2024-04-26 15:44:56.535009] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14050a0, cid 5, qid 0 00:26:26.385 [2024-04-26 15:44:56.535089] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535097] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.535100] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535105] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.535113] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535119] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.535123] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535127] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14050a0) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.535161] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535168] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535176] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535199] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14050a0, cid 5, qid 0 00:26:26.385 [2024-04-26 15:44:56.535266] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535275] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.535279] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535284] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14050a0) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.535296] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535301] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535330] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14050a0, cid 5, qid 0 00:26:26.385 [2024-04-26 15:44:56.535392] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535400] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 
[2024-04-26 15:44:56.535404] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535408] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14050a0) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.535420] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535424] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535450] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14050a0, cid 5, qid 0 00:26:26.385 [2024-04-26 15:44:56.535507] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535514] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.535518] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535522] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14050a0) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.535537] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535543] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535558] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535562] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535577] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535581] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535587] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535596] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535600] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13bc300) 00:26:26.385 [2024-04-26 15:44:56.535607] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.385 [2024-04-26 15:44:56.535628] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x14050a0, cid 5, qid 0 00:26:26.385 [2024-04-26 15:44:56.535635] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404f40, cid 4, qid 0 00:26:26.385 [2024-04-26 15:44:56.535640] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1405200, cid 6, qid 0 00:26:26.385 [2024-04-26 15:44:56.535645] 
nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1405360, cid 7, qid 0 00:26:26.385 [2024-04-26 15:44:56.535780] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.385 [2024-04-26 15:44:56.535788] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.385 [2024-04-26 15:44:56.535791] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535795] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=8192, cccid=5 00:26:26.385 [2024-04-26 15:44:56.535800] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x14050a0) on tqpair(0x13bc300): expected_datao=0, payload_size=8192 00:26:26.385 [2024-04-26 15:44:56.535805] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535822] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535827] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535833] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.385 [2024-04-26 15:44:56.535839] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.385 [2024-04-26 15:44:56.535843] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535847] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=512, cccid=4 00:26:26.385 [2024-04-26 15:44:56.535852] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1404f40) on tqpair(0x13bc300): expected_datao=0, payload_size=512 00:26:26.385 [2024-04-26 15:44:56.535857] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535863] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535867] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535873] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.385 [2024-04-26 15:44:56.535879] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.385 [2024-04-26 15:44:56.535883] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535887] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=512, cccid=6 00:26:26.385 [2024-04-26 15:44:56.535892] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1405200) on tqpair(0x13bc300): expected_datao=0, payload_size=512 00:26:26.385 [2024-04-26 15:44:56.535896] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535903] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535906] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535912] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:26:26.385 [2024-04-26 15:44:56.535918] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:26:26.385 [2024-04-26 15:44:56.535922] nvme_tcp.c:1707:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535926] nvme_tcp.c:1708:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x13bc300): datao=0, datal=4096, cccid=7 00:26:26.385 [2024-04-26 15:44:56.535930] 
nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1405360) on tqpair(0x13bc300): expected_datao=0, payload_size=4096 00:26:26.385 [2024-04-26 15:44:56.535935] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535942] nvme_tcp.c:1509:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535946] nvme_tcp.c:1293:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535954] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535960] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.535964] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.535968] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x14050a0) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.535987] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.535994] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.535997] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.536001] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404f40) on tqpair=0x13bc300 00:26:26.385 [2024-04-26 15:44:56.536014] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.385 [2024-04-26 15:44:56.536020] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.385 [2024-04-26 15:44:56.536024] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.385 [2024-04-26 15:44:56.536028] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1405200) on tqpair=0x13bc300 00:26:26.385 ===================================================== 00:26:26.385 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:26.385 ===================================================== 00:26:26.385 Controller Capabilities/Features 00:26:26.385 ================================ 00:26:26.386 Vendor ID: 8086 00:26:26.386 Subsystem Vendor ID: 8086 00:26:26.386 Serial Number: SPDK00000000000001 00:26:26.386 Model Number: SPDK bdev Controller 00:26:26.386 Firmware Version: 24.05 00:26:26.386 Recommended Arb Burst: 6 00:26:26.386 IEEE OUI Identifier: e4 d2 5c 00:26:26.386 Multi-path I/O 00:26:26.386 May have multiple subsystem ports: Yes 00:26:26.386 May have multiple controllers: Yes 00:26:26.386 Associated with SR-IOV VF: No 00:26:26.386 Max Data Transfer Size: 131072 00:26:26.386 Max Number of Namespaces: 32 00:26:26.386 Max Number of I/O Queues: 127 00:26:26.386 NVMe Specification Version (VS): 1.3 00:26:26.386 NVMe Specification Version (Identify): 1.3 00:26:26.386 Maximum Queue Entries: 128 00:26:26.386 Contiguous Queues Required: Yes 00:26:26.386 Arbitration Mechanisms Supported 00:26:26.386 Weighted Round Robin: Not Supported 00:26:26.386 Vendor Specific: Not Supported 00:26:26.386 Reset Timeout: 15000 ms 00:26:26.386 Doorbell Stride: 4 bytes 00:26:26.386 NVM Subsystem Reset: Not Supported 00:26:26.386 Command Sets Supported 00:26:26.386 NVM Command Set: Supported 00:26:26.386 Boot Partition: Not Supported 00:26:26.386 Memory Page Size Minimum: 4096 bytes 00:26:26.386 Memory Page Size Maximum: 4096 bytes 00:26:26.386 Persistent Memory Region: Not Supported 00:26:26.386 Optional Asynchronous Events Supported 00:26:26.386 Namespace Attribute Notices: 
Supported
00:26:26.386 Firmware Activation Notices: Not Supported
00:26:26.386 ANA Change Notices: Not Supported
00:26:26.386 PLE Aggregate Log Change Notices: Not Supported
00:26:26.386 LBA Status Info Alert Notices: Not Supported
00:26:26.386 EGE Aggregate Log Change Notices: Not Supported
00:26:26.386 Normal NVM Subsystem Shutdown event: Not Supported
00:26:26.386 Zone Descriptor Change Notices: Not Supported
00:26:26.386 Discovery Log Change Notices: Not Supported
00:26:26.386 Controller Attributes
00:26:26.386 128-bit Host Identifier: Supported
00:26:26.386 Non-Operational Permissive Mode: Not Supported
00:26:26.386 NVM Sets: Not Supported
00:26:26.386 Read Recovery Levels: Not Supported
00:26:26.386 Endurance Groups: Not Supported
00:26:26.386 Predictable Latency Mode: Not Supported
00:26:26.386 Traffic Based Keep ALive: Not Supported
00:26:26.386 Namespace Granularity: Not Supported
00:26:26.386 SQ Associations: Not Supported
00:26:26.386 UUID List: Not Supported
00:26:26.386 Multi-Domain Subsystem: Not Supported
00:26:26.386 Fixed Capacity Management: Not Supported
00:26:26.386 Variable Capacity Management: Not Supported
00:26:26.386 Delete Endurance Group: Not Supported
00:26:26.386 Delete NVM Set: Not Supported
00:26:26.386 Extended LBA Formats Supported: Not Supported
00:26:26.386 Flexible Data Placement Supported: Not Supported
00:26:26.386
00:26:26.386 Controller Memory Buffer Support
00:26:26.386 ================================
00:26:26.386 Supported: No
00:26:26.386
00:26:26.386 Persistent Memory Region Support
00:26:26.386 ================================
00:26:26.386 Supported: No
00:26:26.386
00:26:26.386 Admin Command Set Attributes
00:26:26.386 ============================
00:26:26.386 Security Send/Receive: Not Supported
00:26:26.386 Format NVM: Not Supported
00:26:26.386 Firmware Activate/Download: Not Supported
00:26:26.386 Namespace Management: Not Supported
00:26:26.386 Device Self-Test: Not Supported
00:26:26.386 Directives: Not Supported
00:26:26.386 NVMe-MI: Not Supported
00:26:26.386 Virtualization Management: Not Supported
00:26:26.386 Doorbell Buffer Config: Not Supported
00:26:26.386 Get LBA Status Capability: Not Supported
00:26:26.386 Command & Feature Lockdown Capability: Not Supported
00:26:26.386 Abort Command Limit: 4
00:26:26.386 Async Event Request Limit: 4
00:26:26.386 Number of Firmware Slots: N/A
00:26:26.386 Firmware Slot 1 Read-Only: N/A
00:26:26.386 Firmware Activation Without Reset: N/A
00:26:26.386 Multiple Update Detection Support: N/A
00:26:26.386 Firmware Update Granularity: No Information Provided
00:26:26.386 Per-Namespace SMART Log: No
00:26:26.386 Asymmetric Namespace Access Log Page: Not Supported
00:26:26.386 Subsystem NQN: nqn.2016-06.io.spdk:cnode1
00:26:26.386 Command Effects Log Page: Supported
00:26:26.386 Get Log Page Extended Data: Supported
00:26:26.386 Telemetry Log Pages: Not Supported
00:26:26.386 Persistent Event Log Pages: Not Supported
00:26:26.386 Supported Log Pages Log Page: May Support
00:26:26.386 Commands Supported & Effects Log Page: Not Supported
00:26:26.386 Feature Identifiers & Effects Log Page:May Support
00:26:26.386 NVMe-MI Commands & Effects Log Page: May Support
00:26:26.386 Data Area 4 for Telemetry Log: Not Supported
00:26:26.386 Error Log Page Entries Supported: 128
00:26:26.386 Keep Alive: Supported
00:26:26.386 Keep Alive Granularity: 10000 ms
00:26:26.386
00:26:26.386 NVM Command Set Attributes
00:26:26.386 ==========================
00:26:26.386 Submission Queue Entry Size
00:26:26.386 Max: 64
00:26:26.386 Min: 64
00:26:26.386 Completion Queue Entry Size
00:26:26.386 Max: 16
00:26:26.386 Min: 16
00:26:26.386 Number of Namespaces: 32
00:26:26.386 Compare Command: Supported
00:26:26.386 Write Uncorrectable Command: Not Supported
00:26:26.386 Dataset Management Command: Supported
00:26:26.386 Write Zeroes Command: Supported
00:26:26.386 Set Features Save Field: Not Supported
00:26:26.386 Reservations: Supported
00:26:26.386 Timestamp: Not Supported
00:26:26.386 Copy: Supported
00:26:26.386 Volatile Write Cache: Present
00:26:26.386 Atomic Write Unit (Normal): 1
00:26:26.386 Atomic Write Unit (PFail): 1
00:26:26.386 Atomic Compare & Write Unit: 1
00:26:26.386 Fused Compare & Write: Supported
00:26:26.386 Scatter-Gather List
00:26:26.386 SGL Command Set: Supported
00:26:26.386 SGL Keyed: Supported
00:26:26.386 SGL Bit Bucket Descriptor: Not Supported
00:26:26.386 SGL Metadata Pointer: Not Supported
00:26:26.386 Oversized SGL: Not Supported
00:26:26.386 SGL Metadata Address: Not Supported
00:26:26.386 SGL Offset: Supported
00:26:26.386 Transport SGL Data Block: Not Supported
00:26:26.386 Replay Protected Memory Block: Not Supported
00:26:26.386
00:26:26.386 Firmware Slot Information
00:26:26.386 =========================
00:26:26.386 Active slot: 1
00:26:26.386 Slot 1 Firmware Revision: 24.05
00:26:26.386
00:26:26.386
00:26:26.386 Commands Supported and Effects
00:26:26.386 ==============================
00:26:26.386 Admin Commands
00:26:26.386 --------------
00:26:26.386 Get Log Page (02h): Supported
00:26:26.386 Identify (06h): Supported
00:26:26.386 Abort (08h): Supported
00:26:26.386 Set Features (09h): Supported
00:26:26.386 Get Features (0Ah): Supported
00:26:26.386 Asynchronous Event Request (0Ch): Supported
00:26:26.386 Keep Alive (18h): Supported
00:26:26.386 I/O Commands
00:26:26.386 ------------
00:26:26.386 Flush (00h): Supported LBA-Change
00:26:26.386 Write (01h): Supported LBA-Change
00:26:26.386 Read (02h): Supported
00:26:26.386 Compare (05h): Supported
00:26:26.386 Write Zeroes (08h): Supported LBA-Change
00:26:26.386 Dataset Management (09h): Supported LBA-Change
00:26:26.386 Copy (19h): Supported LBA-Change
00:26:26.386 Unknown (79h): Supported LBA-Change
00:26:26.386 Unknown (7Ah): Supported
00:26:26.386
00:26:26.386 Error Log
00:26:26.386 =========
00:26:26.386
00:26:26.386 Arbitration
00:26:26.386 ===========
00:26:26.386 Arbitration Burst: 1
00:26:26.386
00:26:26.386 Power Management
00:26:26.386 ================
00:26:26.386 Number of Power States: 1
00:26:26.386 Current Power State: Power State #0
00:26:26.386 Power State #0:
00:26:26.386 Max Power: 0.00 W
00:26:26.386 Non-Operational State: Operational
00:26:26.386 Entry Latency: Not Reported
00:26:26.386 Exit Latency: Not Reported
00:26:26.386 Relative Read Throughput: 0
00:26:26.386 Relative Read Latency: 0
00:26:26.386 Relative Write Throughput: 0
00:26:26.386 Relative Write Latency: 0
00:26:26.386 Idle Power: Not Reported
00:26:26.386 Active Power: Not Reported
00:26:26.386 Non-Operational Permissive Mode: Not Supported
00:26:26.386
00:26:26.386 Health Information
00:26:26.386 ==================
00:26:26.386 Critical Warnings:
00:26:26.386 Available Spare Space: OK
00:26:26.386 Temperature: OK
00:26:26.386 Device Reliability: OK
00:26:26.386 Read Only: No
00:26:26.386 Volatile Memory Backup: OK
00:26:26.386 Current Temperature: 0 Kelvin (-273 Celsius)
00:26:26.386 Temperature Threshold: [2024-04-26 15:44:56.536036] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:26:26.387 [2024-04-26
15:44:56.536043] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.536046] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536050] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1405360) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.536182] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536190] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.536200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.536236] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1405360, cid 7, qid 0 00:26:26.387 [2024-04-26 15:44:56.536311] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.536318] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.536322] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536326] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1405360) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.536374] nvme_ctrlr.c:4221:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:26:26.387 [2024-04-26 15:44:56.536389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.387 [2024-04-26 15:44:56.536397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.387 [2024-04-26 15:44:56.536404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.387 [2024-04-26 15:44:56.536411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:26.387 [2024-04-26 15:44:56.536420] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536425] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536429] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.536437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.536462] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.536526] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.536532] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.536536] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536540] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.536550] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536554] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 
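The entries here show the tear-down half of the session: nvme_ctrlr_destruct_async prepares the shutdown and the queued admin requests complete as ABORTED - SQ DELETION. Together with the keep-alive interval negotiated earlier in the trace (keep alive every 5000000 us), this implies the usual host-side contract: poll the admin queue so keep-alives and AER completions get processed, then detach cleanly. A hedged sketch of that loop follows, assuming the public spdk/nvme.h API; the function name and exit condition are illustrative, not part of this test.

#include <stdbool.h>
#include "spdk/nvme.h"

/* Illustrative polling/tear-down loop for an already-connected controller.
 * spdk_nvme_ctrlr_process_admin_completions() is what lets the driver send
 * the periodic KEEP ALIVE commands and reap the AER completions seen in the
 * trace; spdk_nvme_detach() kicks off the shutdown/destruct path logged
 * around this point. */
static void
run_until_done(struct spdk_nvme_ctrlr *ctrlr, volatile bool *done)
{
	while (!*done) {
		/* A negative return indicates a failed or removed controller. */
		if (spdk_nvme_ctrlr_process_admin_completions(ctrlr) < 0) {
			break;
		}
	}

	/* Orderly shutdown: CC shutdown notification, then transport close. */
	spdk_nvme_detach(ctrlr);
}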
[2024-04-26 15:44:56.536558] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.536566] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.536588] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.536669] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.536676] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.536679] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536684] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.536690] nvme_ctrlr.c:1082:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:26:26.387 [2024-04-26 15:44:56.536695] nvme_ctrlr.c:1085:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:26:26.387 [2024-04-26 15:44:56.536705] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536710] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536714] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.536722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.536740] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.536800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.536807] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.536811] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536815] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.536828] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536833] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536837] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.536844] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.536862] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.536919] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.536926] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.536930] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536934] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.536945] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:26:26.387 [2024-04-26 15:44:56.536950] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.536954] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.536962] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.536980] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.537038] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.537045] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.537049] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537053] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.537064] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537069] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537073] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.537081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.537098] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.537172] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.537181] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.537185] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537189] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.387 [2024-04-26 15:44:56.537202] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537211] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537217] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.387 [2024-04-26 15:44:56.537229] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.387 [2024-04-26 15:44:56.537253] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.387 [2024-04-26 15:44:56.537313] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.387 [2024-04-26 15:44:56.537325] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.387 [2024-04-26 15:44:56.537330] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.387 [2024-04-26 15:44:56.537334] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.537347] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537352] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537356] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.537364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.537384] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.537442] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.537449] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.537453] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.537469] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537474] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537478] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.537485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.537503] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.537558] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.537565] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.537569] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537573] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.537584] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537589] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537593] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.537600] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.537618] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.537678] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.537685] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.537689] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537693] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.537705] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537710] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537714] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.537721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.537739] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.537796] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.537802] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.537806] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537810] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.537821] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537826] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537830] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.537838] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.537856] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.537915] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.537922] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.537926] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537930] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.537941] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537947] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.537951] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.537959] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.537977] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.538033] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.538040] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.538044] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538048] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.538059] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538064] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538068] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.538075] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.538093] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 
0 00:26:26.388 [2024-04-26 15:44:56.538164] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.538172] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.538176] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538180] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.538192] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538198] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538203] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.538215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.538243] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.538302] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.538314] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.538318] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538323] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.538335] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538340] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538344] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.538352] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.538371] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.538431] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.538447] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.538452] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538457] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.538469] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538474] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538478] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.538486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.538505] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.538565] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.538572] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.538576] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538580] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.538592] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538597] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538601] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.538609] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.388 [2024-04-26 15:44:56.538627] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.388 [2024-04-26 15:44:56.538682] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.388 [2024-04-26 15:44:56.538689] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.388 [2024-04-26 15:44:56.538693] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538697] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.388 [2024-04-26 15:44:56.538709] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538714] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.388 [2024-04-26 15:44:56.538718] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.388 [2024-04-26 15:44:56.538725] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.538743] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.538800] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.538814] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.538819] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.538823] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.538835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.538841] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.538845] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.538852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.538871] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.538933] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.538940] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.538944] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.538948] nvme_tcp.c: 
908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.538959] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.538965] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.538968] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.538976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.538994] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539055] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539062] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539066] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539070] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539081] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539086] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539090] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539098] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539116] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539185] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539193] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539198] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539204] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539223] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539232] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539236] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539244] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539266] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539326] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539333] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539336] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539341] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539352] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:26:26.389 [2024-04-26 15:44:56.539357] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539361] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539387] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539446] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539453] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539457] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539461] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539473] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539478] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539481] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539489] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539507] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539563] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539570] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539574] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539578] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539590] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539594] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539598] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539624] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539683] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539695] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539699] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539703] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539715] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539720] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539724] nvme_tcp.c: 
958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539732] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539751] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539808] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539815] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539819] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539824] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539835] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539840] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539844] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539852] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539870] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.539929] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.539936] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.539939] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539944] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.539955] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539960] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.539964] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.539971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.539989] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.540050] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.540057] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.540061] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.540065] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.389 [2024-04-26 15:44:56.540077] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.540081] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.540085] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.389 [2024-04-26 15:44:56.540093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.389 [2024-04-26 15:44:56.540111] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.389 [2024-04-26 15:44:56.544160] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.389 [2024-04-26 15:44:56.544183] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.389 [2024-04-26 15:44:56.544188] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.389 [2024-04-26 15:44:56.544193] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.390 [2024-04-26 15:44:56.544209] nvme_tcp.c: 766:nvme_tcp_build_contig_request: *DEBUG*: enter 00:26:26.390 [2024-04-26 15:44:56.544214] nvme_tcp.c: 949:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:26:26.390 [2024-04-26 15:44:56.544218] nvme_tcp.c: 958:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x13bc300) 00:26:26.390 [2024-04-26 15:44:56.544227] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:26.390 [2024-04-26 15:44:56.544255] nvme_tcp.c: 923:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1404de0, cid 3, qid 0 00:26:26.390 [2024-04-26 15:44:56.544324] nvme_tcp.c:1161:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:26:26.390 [2024-04-26 15:44:56.544331] nvme_tcp.c:1963:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:26:26.390 [2024-04-26 15:44:56.544335] nvme_tcp.c:1636:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:26:26.390 [2024-04-26 15:44:56.544351] nvme_tcp.c: 908:nvme_tcp_req_complete_safe: *DEBUG*: complete tcp_req(0x1404de0) on tqpair=0x13bc300 00:26:26.390 [2024-04-26 15:44:56.544361] nvme_ctrlr.c:1204:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 7 milliseconds 00:26:26.390 0 Kelvin (-273 Celsius) 00:26:26.390 Available Spare: 0% 00:26:26.390 Available Spare Threshold: 0% 00:26:26.390 Life Percentage Used: 0% 00:26:26.390 Data Units Read: 0 00:26:26.390 Data Units Written: 0 00:26:26.390 Host Read Commands: 0 00:26:26.390 Host Write Commands: 0 00:26:26.390 Controller Busy Time: 0 minutes 00:26:26.390 Power Cycles: 0 00:26:26.390 Power On Hours: 0 hours 00:26:26.390 Unsafe Shutdowns: 0 00:26:26.390 Unrecoverable Media Errors: 0 00:26:26.390 Lifetime Error Log Entries: 0 00:26:26.390 Warning Temperature Time: 0 minutes 00:26:26.390 Critical Temperature Time: 0 minutes 00:26:26.390 00:26:26.390 Number of Queues 00:26:26.390 ================ 00:26:26.390 Number of I/O Submission Queues: 127 00:26:26.390 Number of I/O Completion Queues: 127 00:26:26.390 00:26:26.390 Active Namespaces 00:26:26.390 ================= 00:26:26.390 Namespace ID:1 00:26:26.390 Error Recovery Timeout: Unlimited 00:26:26.390 Command Set Identifier: NVM (00h) 00:26:26.390 Deallocate: Supported 00:26:26.390 Deallocated/Unwritten Error: Not Supported 00:26:26.390 Deallocated Read Value: Unknown 00:26:26.390 Deallocate in Write Zeroes: Not Supported 00:26:26.390 Deallocated Guard Field: 0xFFFF 00:26:26.390 Flush: Supported 00:26:26.390 Reservation: Supported 00:26:26.390 Namespace Sharing Capabilities: Multiple Controllers 00:26:26.390 Size (in LBAs): 131072 (0GiB) 00:26:26.390 Capacity (in LBAs): 131072 (0GiB) 00:26:26.390 Utilization (in LBAs): 131072 (0GiB) 00:26:26.390 NGUID: ABCDEF0123456789ABCDEF0123456789 00:26:26.390 EUI64: ABCDEF0123456789 00:26:26.390 UUID: 
0e6b32b8-19d6-4ac1-9e7a-289bae09eb7b 00:26:26.390 Thin Provisioning: Not Supported 00:26:26.390 Per-NS Atomic Units: Yes 00:26:26.390 Atomic Boundary Size (Normal): 0 00:26:26.390 Atomic Boundary Size (PFail): 0 00:26:26.390 Atomic Boundary Offset: 0 00:26:26.390 Maximum Single Source Range Length: 65535 00:26:26.390 Maximum Copy Length: 65535 00:26:26.390 Maximum Source Range Count: 1 00:26:26.390 NGUID/EUI64 Never Reused: No 00:26:26.390 Namespace Write Protected: No 00:26:26.390 Number of LBA Formats: 1 00:26:26.390 Current LBA Format: LBA Format #00 00:26:26.390 LBA Format #00: Data Size: 512 Metadata Size: 0 00:26:26.390 00:26:26.390 15:44:56 -- host/identify.sh@51 -- # sync 00:26:26.390 15:44:56 -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:26.390 15:44:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:26.390 15:44:56 -- common/autotest_common.sh@10 -- # set +x 00:26:26.390 15:44:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:26.390 15:44:56 -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:26:26.390 15:44:56 -- host/identify.sh@56 -- # nvmftestfini 00:26:26.390 15:44:56 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:26.390 15:44:56 -- nvmf/common.sh@117 -- # sync 00:26:26.390 15:44:56 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:26.390 15:44:56 -- nvmf/common.sh@120 -- # set +e 00:26:26.390 15:44:56 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:26.390 15:44:56 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:26.390 rmmod nvme_tcp 00:26:26.390 rmmod nvme_fabrics 00:26:26.390 rmmod nvme_keyring 00:26:26.390 15:44:56 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:26.648 15:44:56 -- nvmf/common.sh@124 -- # set -e 00:26:26.648 15:44:56 -- nvmf/common.sh@125 -- # return 0 00:26:26.648 15:44:56 -- nvmf/common.sh@478 -- # '[' -n 80349 ']' 00:26:26.648 15:44:56 -- nvmf/common.sh@479 -- # killprocess 80349 00:26:26.648 15:44:56 -- common/autotest_common.sh@936 -- # '[' -z 80349 ']' 00:26:26.648 15:44:56 -- common/autotest_common.sh@940 -- # kill -0 80349 00:26:26.648 15:44:56 -- common/autotest_common.sh@941 -- # uname 00:26:26.648 15:44:56 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:26.648 15:44:56 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80349 00:26:26.648 15:44:56 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:26.648 15:44:56 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:26.648 killing process with pid 80349 00:26:26.648 15:44:56 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80349' 00:26:26.648 15:44:56 -- common/autotest_common.sh@955 -- # kill 80349 00:26:26.648 [2024-04-26 15:44:56.694465] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:26:26.648 15:44:56 -- common/autotest_common.sh@960 -- # wait 80349 00:26:26.906 15:44:56 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:26.906 15:44:56 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:26.906 15:44:56 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:26.906 15:44:56 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:26.906 15:44:56 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:26.906 15:44:56 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:26.906 15:44:56 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
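The controller and namespace attributes dumped above (127 I/O submission/completion queues, a single 131072-LBA namespace with the NGUID/EUI64/UUID shown) are what the SPDK identify example reports for nqn.2016-06.io.spdk:cnode1 just before host/identify.sh tears the subsystem down. A standalone re-run would look roughly like the sketch below; the spdk_nvme_identify binary name, the 10.0.0.2:4420 address (taken from the perf stage later in this log), and the subnqn field in the transport ID are assumptions, since the trace only shows the script-level helpers:

    # Hypothetical direct invocation; older trees ship the tool as build/examples/identify.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
    # Teardown then matches the rpc_cmd call in the trace:
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1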
00:26:26.906 15:44:56 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:26.906 15:44:56 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:26.906 ************************************ 00:26:26.906 END TEST nvmf_identify 00:26:26.906 ************************************ 00:26:26.906 00:26:26.906 real 0m2.680s 00:26:26.906 user 0m7.405s 00:26:26.906 sys 0m0.653s 00:26:26.906 15:44:57 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:26.906 15:44:57 -- common/autotest_common.sh@10 -- # set +x 00:26:26.906 15:44:57 -- nvmf/nvmf.sh@96 -- # run_test nvmf_perf /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:26.906 15:44:57 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:26.906 15:44:57 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:26.906 15:44:57 -- common/autotest_common.sh@10 -- # set +x 00:26:26.906 ************************************ 00:26:26.906 START TEST nvmf_perf 00:26:26.906 ************************************ 00:26:26.906 15:44:57 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/perf.sh --transport=tcp 00:26:26.906 * Looking for test storage... 00:26:26.906 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:27.165 15:44:57 -- host/perf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:27.165 15:44:57 -- nvmf/common.sh@7 -- # uname -s 00:26:27.165 15:44:57 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:27.165 15:44:57 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:27.165 15:44:57 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:27.165 15:44:57 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:27.165 15:44:57 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:27.165 15:44:57 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:27.165 15:44:57 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:27.166 15:44:57 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:27.166 15:44:57 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:27.166 15:44:57 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:27.166 15:44:57 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:27.166 15:44:57 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:27.166 15:44:57 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:27.166 15:44:57 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:27.166 15:44:57 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:27.166 15:44:57 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:27.166 15:44:57 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:27.166 15:44:57 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:27.166 15:44:57 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:27.166 15:44:57 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:27.166 15:44:57 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.166 15:44:57 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.166 15:44:57 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.166 15:44:57 -- paths/export.sh@5 -- # export PATH 00:26:27.166 15:44:57 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:27.166 15:44:57 -- nvmf/common.sh@47 -- # : 0 00:26:27.166 15:44:57 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:27.166 15:44:57 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:27.166 15:44:57 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:27.166 15:44:57 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:27.166 15:44:57 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:27.166 15:44:57 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:27.166 15:44:57 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:27.166 15:44:57 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:27.166 15:44:57 -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:27.166 15:44:57 -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:27.166 15:44:57 -- host/perf.sh@15 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:27.166 15:44:57 -- host/perf.sh@17 -- # nvmftestinit 00:26:27.166 15:44:57 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:27.166 15:44:57 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:27.166 15:44:57 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:27.166 15:44:57 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:27.166 15:44:57 -- nvmf/common.sh@401 -- # 
remove_spdk_ns 00:26:27.166 15:44:57 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:27.166 15:44:57 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:27.166 15:44:57 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:27.166 15:44:57 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:27.166 15:44:57 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:27.166 15:44:57 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:27.166 15:44:57 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:27.166 15:44:57 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:27.166 15:44:57 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:27.166 15:44:57 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:27.166 15:44:57 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:27.166 15:44:57 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:27.166 15:44:57 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:27.166 15:44:57 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:27.166 15:44:57 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:27.166 15:44:57 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:27.166 15:44:57 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:27.166 15:44:57 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:27.166 15:44:57 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:27.166 15:44:57 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:27.166 15:44:57 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:27.166 15:44:57 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:27.166 15:44:57 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:27.166 Cannot find device "nvmf_tgt_br" 00:26:27.166 15:44:57 -- nvmf/common.sh@155 -- # true 00:26:27.166 15:44:57 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:27.166 Cannot find device "nvmf_tgt_br2" 00:26:27.166 15:44:57 -- nvmf/common.sh@156 -- # true 00:26:27.166 15:44:57 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:27.166 15:44:57 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:27.166 Cannot find device "nvmf_tgt_br" 00:26:27.166 15:44:57 -- nvmf/common.sh@158 -- # true 00:26:27.166 15:44:57 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:27.166 Cannot find device "nvmf_tgt_br2" 00:26:27.166 15:44:57 -- nvmf/common.sh@159 -- # true 00:26:27.166 15:44:57 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:27.166 15:44:57 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:27.166 15:44:57 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:27.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:27.166 15:44:57 -- nvmf/common.sh@162 -- # true 00:26:27.166 15:44:57 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:27.166 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:27.166 15:44:57 -- nvmf/common.sh@163 -- # true 00:26:27.166 15:44:57 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:27.166 15:44:57 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:27.166 15:44:57 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:27.166 15:44:57 -- 
nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:27.166 15:44:57 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:27.166 15:44:57 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:27.166 15:44:57 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:27.166 15:44:57 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:27.166 15:44:57 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:27.424 15:44:57 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:27.424 15:44:57 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:27.424 15:44:57 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:27.424 15:44:57 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:27.424 15:44:57 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:27.424 15:44:57 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:27.424 15:44:57 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:27.424 15:44:57 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:27.424 15:44:57 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:27.424 15:44:57 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:27.424 15:44:57 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:27.424 15:44:57 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:27.424 15:44:57 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:27.424 15:44:57 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:27.424 15:44:57 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:27.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:27.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.104 ms 00:26:27.424 00:26:27.424 --- 10.0.0.2 ping statistics --- 00:26:27.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.424 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:26:27.424 15:44:57 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:27.424 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:27.424 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:26:27.424 00:26:27.424 --- 10.0.0.3 ping statistics --- 00:26:27.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.424 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:26:27.424 15:44:57 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:27.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:27.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:26:27.424 00:26:27.424 --- 10.0.0.1 ping statistics --- 00:26:27.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:27.424 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:26:27.424 15:44:57 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:27.424 15:44:57 -- nvmf/common.sh@422 -- # return 0 00:26:27.424 15:44:57 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:27.424 15:44:57 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:27.424 15:44:57 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:27.424 15:44:57 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:27.424 15:44:57 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:27.424 15:44:57 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:27.424 15:44:57 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:27.424 15:44:57 -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:26:27.424 15:44:57 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:27.424 15:44:57 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:27.424 15:44:57 -- common/autotest_common.sh@10 -- # set +x 00:26:27.424 15:44:57 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:27.424 15:44:57 -- nvmf/common.sh@470 -- # nvmfpid=80573 00:26:27.424 15:44:57 -- nvmf/common.sh@471 -- # waitforlisten 80573 00:26:27.424 15:44:57 -- common/autotest_common.sh@817 -- # '[' -z 80573 ']' 00:26:27.424 15:44:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.424 15:44:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:27.424 15:44:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.424 15:44:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:27.424 15:44:57 -- common/autotest_common.sh@10 -- # set +x 00:26:27.424 [2024-04-26 15:44:57.642382] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:27.424 [2024-04-26 15:44:57.642476] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.682 [2024-04-26 15:44:57.779677] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:27.682 [2024-04-26 15:44:57.903277] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.682 [2024-04-26 15:44:57.903333] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:27.682 [2024-04-26 15:44:57.903345] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.682 [2024-04-26 15:44:57.903354] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.682 [2024-04-26 15:44:57.903361] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
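The nvmf_veth_init steps traced above build the loopback topology the rest of the run depends on: the initiator stays in the root namespace on 10.0.0.1, the target interface moves into nvmf_tgt_ns_spdk as 10.0.0.2, both sides are bridged over nvmf_br, port 4420 is opened, and the pings confirm reachability before nvmf_tgt is started. Condensed into a sketch (the second target interface, nvmf_tgt_if2 on 10.0.0.3, is set up the same way and omitted here for brevity):

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
    ping -c 1 10.0.0.2        # root namespace -> target namespace, as verified above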
00:26:27.682 [2024-04-26 15:44:57.903469] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.682 [2024-04-26 15:44:57.903992] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:27.682 [2024-04-26 15:44:57.904523] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:27.682 [2024-04-26 15:44:57.904533] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.622 15:44:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:28.622 15:44:58 -- common/autotest_common.sh@850 -- # return 0 00:26:28.622 15:44:58 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:28.622 15:44:58 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:28.622 15:44:58 -- common/autotest_common.sh@10 -- # set +x 00:26:28.622 15:44:58 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.622 15:44:58 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:26:28.622 15:44:58 -- host/perf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config 00:26:28.951 15:44:59 -- host/perf.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_get_config bdev 00:26:28.951 15:44:59 -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:26:29.211 15:44:59 -- host/perf.sh@30 -- # local_nvme_trid=0000:00:10.0 00:26:29.211 15:44:59 -- host/perf.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:26:29.469 15:44:59 -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:26:29.469 15:44:59 -- host/perf.sh@33 -- # '[' -n 0000:00:10.0 ']' 00:26:29.469 15:44:59 -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:26:29.469 15:44:59 -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:26:29.469 15:44:59 -- host/perf.sh@42 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:26:29.727 [2024-04-26 15:44:59.794629] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:29.727 15:44:59 -- host/perf.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:29.986 15:45:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:29.986 15:45:00 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:30.245 15:45:00 -- host/perf.sh@45 -- # for bdev in $bdevs 00:26:30.245 15:45:00 -- host/perf.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:26:30.245 15:45:00 -- host/perf.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:30.504 [2024-04-26 15:45:00.731840] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:30.504 15:45:00 -- host/perf.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:30.763 15:45:00 -- host/perf.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:26:30.763 15:45:00 -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:30.763 15:45:00 -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:26:30.763 15:45:00 -- host/perf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:00:10.0' 00:26:32.137 Initializing NVMe 
Controllers 00:26:32.137 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:26:32.137 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:26:32.137 Initialization complete. Launching workers. 00:26:32.137 ======================================================== 00:26:32.137 Latency(us) 00:26:32.137 Device Information : IOPS MiB/s Average min max 00:26:32.137 PCIE (0000:00:10.0) NSID 1 from core 0: 24266.00 94.79 1323.50 290.30 7287.62 00:26:32.137 ======================================================== 00:26:32.137 Total : 24266.00 94.79 1323.50 290.30 7287.62 00:26:32.137 00:26:32.137 15:45:02 -- host/perf.sh@56 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:33.521 Initializing NVMe Controllers 00:26:33.521 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:33.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:33.521 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:33.521 Initialization complete. Launching workers. 00:26:33.521 ======================================================== 00:26:33.521 Latency(us) 00:26:33.521 Device Information : IOPS MiB/s Average min max 00:26:33.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3595.24 14.04 277.82 112.05 4256.94 00:26:33.521 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 123.63 0.48 8152.26 7034.44 12004.10 00:26:33.521 ======================================================== 00:26:33.521 Total : 3718.87 14.53 539.59 112.05 12004.10 00:26:33.521 00:26:33.521 15:45:03 -- host/perf.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:34.899 Initializing NVMe Controllers 00:26:34.899 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:34.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:34.899 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:34.899 Initialization complete. Launching workers. 00:26:34.899 ======================================================== 00:26:34.899 Latency(us) 00:26:34.900 Device Information : IOPS MiB/s Average min max 00:26:34.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 9008.88 35.19 3553.58 673.49 7542.98 00:26:34.900 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 2661.97 10.40 12126.46 6031.56 20152.36 00:26:34.900 ======================================================== 00:26:34.900 Total : 11670.85 45.59 5508.94 673.49 20152.36 00:26:34.900 00:26:34.900 15:45:04 -- host/perf.sh@59 -- # [[ '' == \e\8\1\0 ]] 00:26:34.900 15:45:04 -- host/perf.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:37.430 Initializing NVMe Controllers 00:26:37.430 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.430 Controller IO queue size 128, less than required. 00:26:37.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.430 Controller IO queue size 128, less than required. 
00:26:37.430 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:37.430 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:37.430 Initialization complete. Launching workers. 00:26:37.430 ======================================================== 00:26:37.430 Latency(us) 00:26:37.431 Device Information : IOPS MiB/s Average min max 00:26:37.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1417.43 354.36 92321.88 62275.29 168376.65 00:26:37.431 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 578.27 144.57 228774.95 76038.88 348817.50 00:26:37.431 ======================================================== 00:26:37.431 Total : 1995.69 498.92 131860.12 62275.29 348817.50 00:26:37.431 00:26:37.431 15:45:07 -- host/perf.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:26:37.431 No valid NVMe controllers or AIO or URING devices found 00:26:37.431 Initializing NVMe Controllers 00:26:37.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.431 Controller IO queue size 128, less than required. 00:26:37.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.431 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:26:37.431 Controller IO queue size 128, less than required. 00:26:37.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:37.431 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 4096. Removing this ns from test 00:26:37.431 WARNING: Some requested NVMe devices were skipped 00:26:37.431 15:45:07 -- host/perf.sh@65 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:26:39.961 Initializing NVMe Controllers 00:26:39.961 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:39.961 Controller IO queue size 128, less than required. 00:26:39.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:39.961 Controller IO queue size 128, less than required. 00:26:39.961 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:26:39.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:26:39.961 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:26:39.961 Initialization complete. Launching workers. 
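The perf stages above reduce to a short target-side RPC sequence followed by host-side spdk_nvme_perf runs against the TCP transport ID. The sketch below replays the calls visible in the trace (the sector-size comments come from the "-o 36964" warnings), ending with the --transport-stat run whose per-queue statistics follow:

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512                                  # -> Malloc0
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0   # NSID 1, 512 B sectors
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1   # NSID 2, 4096 B sectors
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    # One of the host-side runs: queue depth 128, 256 KiB I/O, 50/50 randrw for 2 s,
    # with per-connection transport statistics.
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat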
00:26:39.961 00:26:39.961 ==================== 00:26:39.961 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:26:39.961 TCP transport: 00:26:39.961 polls: 9917 00:26:39.961 idle_polls: 4785 00:26:39.961 sock_completions: 5132 00:26:39.961 nvme_completions: 3351 00:26:39.961 submitted_requests: 5052 00:26:39.961 queued_requests: 1 00:26:39.961 00:26:39.961 ==================== 00:26:39.961 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:26:39.961 TCP transport: 00:26:39.961 polls: 11870 00:26:39.961 idle_polls: 8274 00:26:39.961 sock_completions: 3596 00:26:39.961 nvme_completions: 6555 00:26:39.961 submitted_requests: 9840 00:26:39.961 queued_requests: 1 00:26:39.961 ======================================================== 00:26:39.961 Latency(us) 00:26:39.961 Device Information : IOPS MiB/s Average min max 00:26:39.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 837.34 209.34 157688.71 100065.13 238126.16 00:26:39.961 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1638.19 409.55 78319.69 35038.68 132857.72 00:26:39.961 ======================================================== 00:26:39.961 Total : 2475.53 618.88 105166.03 35038.68 238126.16 00:26:39.961 00:26:39.961 15:45:10 -- host/perf.sh@66 -- # sync 00:26:39.961 15:45:10 -- host/perf.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:40.219 15:45:10 -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:26:40.219 15:45:10 -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:26:40.219 15:45:10 -- host/perf.sh@114 -- # nvmftestfini 00:26:40.219 15:45:10 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:40.219 15:45:10 -- nvmf/common.sh@117 -- # sync 00:26:40.219 15:45:10 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:40.219 15:45:10 -- nvmf/common.sh@120 -- # set +e 00:26:40.219 15:45:10 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:40.219 15:45:10 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:40.219 rmmod nvme_tcp 00:26:40.219 rmmod nvme_fabrics 00:26:40.477 rmmod nvme_keyring 00:26:40.477 15:45:10 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:40.477 15:45:10 -- nvmf/common.sh@124 -- # set -e 00:26:40.477 15:45:10 -- nvmf/common.sh@125 -- # return 0 00:26:40.477 15:45:10 -- nvmf/common.sh@478 -- # '[' -n 80573 ']' 00:26:40.477 15:45:10 -- nvmf/common.sh@479 -- # killprocess 80573 00:26:40.477 15:45:10 -- common/autotest_common.sh@936 -- # '[' -z 80573 ']' 00:26:40.477 15:45:10 -- common/autotest_common.sh@940 -- # kill -0 80573 00:26:40.477 15:45:10 -- common/autotest_common.sh@941 -- # uname 00:26:40.477 15:45:10 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:40.477 15:45:10 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 80573 00:26:40.477 killing process with pid 80573 00:26:40.477 15:45:10 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:40.477 15:45:10 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:40.477 15:45:10 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 80573' 00:26:40.477 15:45:10 -- common/autotest_common.sh@955 -- # kill 80573 00:26:40.477 15:45:10 -- common/autotest_common.sh@960 -- # wait 80573 00:26:41.411 15:45:11 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:41.411 15:45:11 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:41.411 15:45:11 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:41.411 15:45:11 -- 
nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:41.411 15:45:11 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:41.411 15:45:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.411 15:45:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.411 15:45:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.411 15:45:11 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:41.411 00:26:41.411 real 0m14.409s 00:26:41.411 user 0m52.521s 00:26:41.411 sys 0m3.540s 00:26:41.411 15:45:11 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:41.411 ************************************ 00:26:41.411 15:45:11 -- common/autotest_common.sh@10 -- # set +x 00:26:41.411 END TEST nvmf_perf 00:26:41.411 ************************************ 00:26:41.411 15:45:11 -- nvmf/nvmf.sh@97 -- # run_test nvmf_fio_host /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:41.411 15:45:11 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:41.411 15:45:11 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:41.411 15:45:11 -- common/autotest_common.sh@10 -- # set +x 00:26:41.411 ************************************ 00:26:41.411 START TEST nvmf_fio_host 00:26:41.411 ************************************ 00:26:41.411 15:45:11 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/fio.sh --transport=tcp 00:26:41.669 * Looking for test storage... 00:26:41.669 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:41.669 15:45:11 -- host/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.669 15:45:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.669 15:45:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.669 15:45:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.669 15:45:11 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.669 15:45:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.669 15:45:11 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.669 15:45:11 -- paths/export.sh@5 -- # export PATH 00:26:41.669 15:45:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.669 15:45:11 -- host/fio.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:41.669 15:45:11 -- nvmf/common.sh@7 -- # uname -s 00:26:41.669 15:45:11 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:41.669 15:45:11 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:41.669 15:45:11 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:41.669 15:45:11 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:41.669 15:45:11 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:41.669 15:45:11 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:41.669 15:45:11 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:41.669 15:45:11 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:41.669 15:45:11 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:41.669 15:45:11 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:41.669 15:45:11 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:41.669 15:45:11 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:41.669 15:45:11 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:41.669 15:45:11 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:41.669 15:45:11 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:41.669 15:45:11 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:41.669 15:45:11 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:41.669 15:45:11 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:41.669 15:45:11 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:41.669 15:45:11 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:41.669 15:45:11 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.670 15:45:11 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.670 15:45:11 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.670 15:45:11 -- paths/export.sh@5 -- # export PATH 00:26:41.670 15:45:11 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:41.670 15:45:11 -- nvmf/common.sh@47 -- # : 0 00:26:41.670 15:45:11 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:41.670 15:45:11 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:41.670 15:45:11 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:41.670 15:45:11 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:41.670 15:45:11 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:41.670 15:45:11 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:41.670 15:45:11 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:41.670 15:45:11 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:41.670 15:45:11 -- host/fio.sh@12 -- # nvmftestinit 00:26:41.670 15:45:11 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:41.670 15:45:11 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:41.670 15:45:11 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:41.670 15:45:11 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:41.670 15:45:11 -- 
nvmf/common.sh@401 -- # remove_spdk_ns 00:26:41.670 15:45:11 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:41.670 15:45:11 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:41.670 15:45:11 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:41.670 15:45:11 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:41.670 15:45:11 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:41.670 15:45:11 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:41.670 15:45:11 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:41.670 15:45:11 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:41.670 15:45:11 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:41.670 15:45:11 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.670 15:45:11 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.670 15:45:11 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:41.670 15:45:11 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:41.670 15:45:11 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:41.670 15:45:11 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:41.670 15:45:11 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:41.670 15:45:11 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.670 15:45:11 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:41.670 15:45:11 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:41.670 15:45:11 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:41.670 15:45:11 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:41.670 15:45:11 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:41.670 15:45:11 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:41.670 Cannot find device "nvmf_tgt_br" 00:26:41.670 15:45:11 -- nvmf/common.sh@155 -- # true 00:26:41.670 15:45:11 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:41.670 Cannot find device "nvmf_tgt_br2" 00:26:41.670 15:45:11 -- nvmf/common.sh@156 -- # true 00:26:41.670 15:45:11 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:41.670 15:45:11 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:41.670 Cannot find device "nvmf_tgt_br" 00:26:41.670 15:45:11 -- nvmf/common.sh@158 -- # true 00:26:41.670 15:45:11 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:41.670 Cannot find device "nvmf_tgt_br2" 00:26:41.670 15:45:11 -- nvmf/common.sh@159 -- # true 00:26:41.670 15:45:11 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:41.670 15:45:11 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:41.670 15:45:11 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:41.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.670 15:45:11 -- nvmf/common.sh@162 -- # true 00:26:41.670 15:45:11 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:41.670 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:41.670 15:45:11 -- nvmf/common.sh@163 -- # true 00:26:41.670 15:45:11 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:41.670 15:45:11 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:41.670 15:45:11 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 
00:26:41.670 15:45:11 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:41.928 15:45:11 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:41.928 15:45:11 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:41.928 15:45:12 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:41.928 15:45:12 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:41.928 15:45:12 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:41.928 15:45:12 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:41.928 15:45:12 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:41.928 15:45:12 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:41.928 15:45:12 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:41.928 15:45:12 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:26:41.928 15:45:12 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:41.928 15:45:12 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:41.928 15:45:12 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:41.928 15:45:12 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:41.928 15:45:12 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:41.928 15:45:12 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:41.928 15:45:12 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:41.928 15:45:12 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:41.928 15:45:12 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:41.928 15:45:12 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:41.928 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.075 ms 00:26:41.929 00:26:41.929 --- 10.0.0.2 ping statistics --- 00:26:41.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.929 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:26:41.929 15:45:12 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:41.929 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:41.929 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:26:41.929 00:26:41.929 --- 10.0.0.3 ping statistics --- 00:26:41.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.929 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:26:41.929 15:45:12 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:41.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.024 ms 00:26:41.929 00:26:41.929 --- 10.0.0.1 ping statistics --- 00:26:41.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.929 rtt min/avg/max/mdev = 0.024/0.024/0.024/0.000 ms 00:26:41.929 15:45:12 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.929 15:45:12 -- nvmf/common.sh@422 -- # return 0 00:26:41.929 15:45:12 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:41.929 15:45:12 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.929 15:45:12 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:41.929 15:45:12 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:41.929 15:45:12 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.929 15:45:12 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:41.929 15:45:12 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:41.929 15:45:12 -- host/fio.sh@14 -- # [[ y != y ]] 00:26:41.929 15:45:12 -- host/fio.sh@19 -- # timing_enter start_nvmf_tgt 00:26:41.929 15:45:12 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:41.929 15:45:12 -- common/autotest_common.sh@10 -- # set +x 00:26:41.929 15:45:12 -- host/fio.sh@22 -- # nvmfpid=81057 00:26:41.929 15:45:12 -- host/fio.sh@21 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.929 15:45:12 -- host/fio.sh@24 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:41.929 15:45:12 -- host/fio.sh@26 -- # waitforlisten 81057 00:26:41.929 15:45:12 -- common/autotest_common.sh@817 -- # '[' -z 81057 ']' 00:26:41.929 15:45:12 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.929 15:45:12 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:41.929 15:45:12 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.929 15:45:12 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:41.929 15:45:12 -- common/autotest_common.sh@10 -- # set +x 00:26:41.929 [2024-04-26 15:45:12.205903] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:41.929 [2024-04-26 15:45:12.205998] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:42.186 [2024-04-26 15:45:12.345255] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:42.186 [2024-04-26 15:45:12.477464] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:42.186 [2024-04-26 15:45:12.477536] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:42.186 [2024-04-26 15:45:12.477550] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:42.186 [2024-04-26 15:45:12.477561] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:42.186 [2024-04-26 15:45:12.477571] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:42.186 [2024-04-26 15:45:12.477976] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:42.186 [2024-04-26 15:45:12.478161] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:42.186 [2024-04-26 15:45:12.478422] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:42.443 [2024-04-26 15:45:12.478431] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.007 15:45:13 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:43.007 15:45:13 -- common/autotest_common.sh@850 -- # return 0 00:26:43.007 15:45:13 -- host/fio.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.007 15:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.007 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.007 [2024-04-26 15:45:13.232599] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.007 15:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.007 15:45:13 -- host/fio.sh@28 -- # timing_exit start_nvmf_tgt 00:26:43.007 15:45:13 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:43.007 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.007 15:45:13 -- host/fio.sh@30 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:26:43.007 15:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.007 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.265 Malloc1 00:26:43.265 15:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.265 15:45:13 -- host/fio.sh@31 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:43.265 15:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.265 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.265 15:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.265 15:45:13 -- host/fio.sh@32 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:26:43.265 15:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.265 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.265 15:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.265 15:45:13 -- host/fio.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:43.265 15:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.265 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.265 [2024-04-26 15:45:13.336327] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:43.265 15:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.265 15:45:13 -- host/fio.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:43.265 15:45:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:43.265 15:45:13 -- common/autotest_common.sh@10 -- # set +x 00:26:43.265 15:45:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:43.265 15:45:13 -- host/fio.sh@36 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:26:43.265 15:45:13 -- host/fio.sh@39 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:43.265 15:45:13 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:26:43.265 15:45:13 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:43.265 15:45:13 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:43.265 15:45:13 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:43.265 15:45:13 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:43.265 15:45:13 -- common/autotest_common.sh@1327 -- # shift 00:26:43.265 15:45:13 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:43.265 15:45:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:43.265 15:45:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:43.265 15:45:13 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:43.265 15:45:13 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:43.266 15:45:13 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:43.266 15:45:13 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:43.266 15:45:13 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:43.266 15:45:13 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:26:43.266 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:26:43.266 fio-3.35 00:26:43.266 Starting 1 thread 00:26:45.789 00:26:45.789 test: (groupid=0, jobs=1): err= 0: pid=81136: Fri Apr 26 15:45:15 2024 00:26:45.789 read: IOPS=9272, BW=36.2MiB/s (38.0MB/s)(72.7MiB/2007msec) 00:26:45.789 slat (usec): min=2, max=317, avg= 2.53, stdev= 3.05 00:26:45.789 clat (usec): min=3039, max=13125, avg=7179.80, stdev=493.26 00:26:45.789 lat (usec): min=3080, max=13128, avg=7182.32, stdev=493.07 00:26:45.789 clat percentiles (usec): 00:26:45.789 | 1.00th=[ 6128], 5.00th=[ 6456], 10.00th=[ 6652], 20.00th=[ 6783], 00:26:45.789 | 30.00th=[ 6915], 40.00th=[ 7046], 50.00th=[ 7177], 60.00th=[ 7242], 00:26:45.789 | 70.00th=[ 7373], 80.00th=[ 7504], 90.00th=[ 7701], 95.00th=[ 7898], 00:26:45.789 | 99.00th=[ 8356], 99.50th=[ 8717], 99.90th=[10814], 99.95th=[11731], 00:26:45.789 | 99.99th=[13042] 00:26:45.789 bw ( KiB/s): min=36072, max=38080, per=100.00%, avg=37096.00, stdev=823.52, samples=4 00:26:45.789 iops : min= 9018, max= 9520, avg=9274.00, stdev=205.88, samples=4 00:26:45.789 write: IOPS=9280, BW=36.3MiB/s (38.0MB/s)(72.8MiB/2007msec); 0 zone resets 00:26:45.789 slat (usec): min=2, max=232, avg= 2.60, stdev= 1.92 00:26:45.789 clat (usec): min=2308, max=12647, avg=6549.46, stdev=456.36 00:26:45.789 lat (usec): min=2321, max=12649, avg=6552.06, stdev=456.27 00:26:45.789 clat percentiles (usec): 00:26:45.789 | 1.00th=[ 5604], 5.00th=[ 5932], 10.00th=[ 6063], 20.00th=[ 6259], 00:26:45.789 | 30.00th=[ 6325], 40.00th=[ 6456], 50.00th=[ 6521], 60.00th=[ 6652], 00:26:45.789 | 70.00th=[ 
6718], 80.00th=[ 6849], 90.00th=[ 7046], 95.00th=[ 7177], 00:26:45.789 | 99.00th=[ 7570], 99.50th=[ 8029], 99.90th=[10683], 99.95th=[11600], 00:26:45.789 | 99.99th=[12649] 00:26:45.789 bw ( KiB/s): min=37000, max=37248, per=100.00%, avg=37122.00, stdev=136.45, samples=4 00:26:45.789 iops : min= 9250, max= 9312, avg=9280.50, stdev=34.11, samples=4 00:26:45.789 lat (msec) : 4=0.09%, 10=99.78%, 20=0.13% 00:26:45.789 cpu : usr=65.50%, sys=24.48%, ctx=10, majf=0, minf=6 00:26:45.789 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:26:45.789 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:45.789 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:45.789 issued rwts: total=18610,18626,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:45.789 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:45.789 00:26:45.789 Run status group 0 (all jobs): 00:26:45.789 READ: bw=36.2MiB/s (38.0MB/s), 36.2MiB/s-36.2MiB/s (38.0MB/s-38.0MB/s), io=72.7MiB (76.2MB), run=2007-2007msec 00:26:45.789 WRITE: bw=36.3MiB/s (38.0MB/s), 36.3MiB/s-36.3MiB/s (38.0MB/s-38.0MB/s), io=72.8MiB (76.3MB), run=2007-2007msec 00:26:45.789 15:45:15 -- host/fio.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:45.789 15:45:15 -- common/autotest_common.sh@1346 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:45.789 15:45:15 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:26:45.789 15:45:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:45.789 15:45:15 -- common/autotest_common.sh@1325 -- # local sanitizers 00:26:45.789 15:45:15 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:45.789 15:45:15 -- common/autotest_common.sh@1327 -- # shift 00:26:45.789 15:45:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:26:45.789 15:45:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:45.789 15:45:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:45.789 15:45:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:26:45.789 15:45:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:26:45.789 15:45:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:26:45.789 15:45:15 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:26:45.789 15:45:15 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:26:45.789 test: (g=0): rw=randrw, 
bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:26:45.789 fio-3.35 00:26:45.789 Starting 1 thread 00:26:48.326 00:26:48.326 test: (groupid=0, jobs=1): err= 0: pid=81183: Fri Apr 26 15:45:18 2024 00:26:48.326 read: IOPS=8055, BW=126MiB/s (132MB/s)(253MiB/2006msec) 00:26:48.326 slat (usec): min=3, max=121, avg= 3.76, stdev= 1.75 00:26:48.326 clat (usec): min=3023, max=20417, avg=9335.47, stdev=2322.38 00:26:48.326 lat (usec): min=3027, max=20422, avg=9339.23, stdev=2322.50 00:26:48.326 clat percentiles (usec): 00:26:48.326 | 1.00th=[ 4817], 5.00th=[ 5997], 10.00th=[ 6521], 20.00th=[ 7242], 00:26:48.326 | 30.00th=[ 7898], 40.00th=[ 8586], 50.00th=[ 9110], 60.00th=[ 9896], 00:26:48.326 | 70.00th=[10552], 80.00th=[11338], 90.00th=[12125], 95.00th=[13304], 00:26:48.326 | 99.00th=[15664], 99.50th=[16909], 99.90th=[17695], 99.95th=[20055], 00:26:48.326 | 99.99th=[20317] 00:26:48.326 bw ( KiB/s): min=57824, max=76416, per=51.77%, avg=66728.00, stdev=7785.34, samples=4 00:26:48.326 iops : min= 3614, max= 4776, avg=4170.50, stdev=486.58, samples=4 00:26:48.326 write: IOPS=4728, BW=73.9MiB/s (77.5MB/s)(137MiB/1851msec); 0 zone resets 00:26:48.326 slat (usec): min=35, max=1843, avg=39.16, stdev=20.71 00:26:48.326 clat (usec): min=4652, max=20635, avg=11276.39, stdev=1972.81 00:26:48.326 lat (usec): min=4689, max=20681, avg=11315.55, stdev=1974.09 00:26:48.326 clat percentiles (usec): 00:26:48.326 | 1.00th=[ 7570], 5.00th=[ 8586], 10.00th=[ 8979], 20.00th=[ 9503], 00:26:48.326 | 30.00th=[10028], 40.00th=[10552], 50.00th=[11076], 60.00th=[11469], 00:26:48.326 | 70.00th=[12125], 80.00th=[12911], 90.00th=[13960], 95.00th=[14877], 00:26:48.326 | 99.00th=[16712], 99.50th=[17433], 99.90th=[19268], 99.95th=[19268], 00:26:48.326 | 99.99th=[20579] 00:26:48.326 bw ( KiB/s): min=60608, max=79776, per=91.94%, avg=69552.00, stdev=8136.67, samples=4 00:26:48.326 iops : min= 3788, max= 4986, avg=4347.00, stdev=508.54, samples=4 00:26:48.326 lat (msec) : 4=0.11%, 10=50.12%, 20=49.73%, 50=0.04% 00:26:48.326 cpu : usr=73.57%, sys=16.81%, ctx=6, majf=0, minf=13 00:26:48.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.7% 00:26:48.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:48.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:26:48.326 issued rwts: total=16160,8752,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:48.326 latency : target=0, window=0, percentile=100.00%, depth=128 00:26:48.326 00:26:48.326 Run status group 0 (all jobs): 00:26:48.326 READ: bw=126MiB/s (132MB/s), 126MiB/s-126MiB/s (132MB/s-132MB/s), io=253MiB (265MB), run=2006-2006msec 00:26:48.326 WRITE: bw=73.9MiB/s (77.5MB/s), 73.9MiB/s-73.9MiB/s (77.5MB/s-77.5MB/s), io=137MiB (143MB), run=1851-1851msec 00:26:48.326 15:45:18 -- host/fio.sh@45 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:48.326 15:45:18 -- common/autotest_common.sh@549 -- # xtrace_disable 00:26:48.326 15:45:18 -- common/autotest_common.sh@10 -- # set +x 00:26:48.326 15:45:18 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:26:48.326 15:45:18 -- host/fio.sh@47 -- # '[' 0 -eq 1 ']' 00:26:48.326 15:45:18 -- host/fio.sh@81 -- # trap - SIGINT SIGTERM EXIT 00:26:48.326 15:45:18 -- host/fio.sh@83 -- # rm -f ./local-test-0-verify.state 00:26:48.326 15:45:18 -- host/fio.sh@84 -- # nvmftestfini 00:26:48.326 15:45:18 -- nvmf/common.sh@477 -- # nvmfcleanup 00:26:48.326 15:45:18 -- nvmf/common.sh@117 -- # sync 00:26:48.326 15:45:18 -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:48.326 15:45:18 -- nvmf/common.sh@120 -- # set +e 00:26:48.326 15:45:18 -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:48.326 15:45:18 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:48.326 rmmod nvme_tcp 00:26:48.326 rmmod nvme_fabrics 00:26:48.326 rmmod nvme_keyring 00:26:48.326 15:45:18 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:48.326 15:45:18 -- nvmf/common.sh@124 -- # set -e 00:26:48.326 15:45:18 -- nvmf/common.sh@125 -- # return 0 00:26:48.326 15:45:18 -- nvmf/common.sh@478 -- # '[' -n 81057 ']' 00:26:48.326 15:45:18 -- nvmf/common.sh@479 -- # killprocess 81057 00:26:48.326 15:45:18 -- common/autotest_common.sh@936 -- # '[' -z 81057 ']' 00:26:48.326 15:45:18 -- common/autotest_common.sh@940 -- # kill -0 81057 00:26:48.326 15:45:18 -- common/autotest_common.sh@941 -- # uname 00:26:48.326 15:45:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:26:48.326 15:45:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81057 00:26:48.326 killing process with pid 81057 00:26:48.326 15:45:18 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:26:48.326 15:45:18 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:26:48.327 15:45:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81057' 00:26:48.327 15:45:18 -- common/autotest_common.sh@955 -- # kill 81057 00:26:48.327 15:45:18 -- common/autotest_common.sh@960 -- # wait 81057 00:26:48.592 15:45:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:26:48.592 15:45:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:26:48.592 15:45:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:26:48.592 15:45:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.592 15:45:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.592 15:45:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.592 15:45:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.592 15:45:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.592 15:45:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:26:48.592 00:26:48.592 real 0m7.101s 00:26:48.592 user 0m27.375s 00:26:48.592 sys 0m2.009s 00:26:48.592 15:45:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:26:48.592 15:45:18 -- common/autotest_common.sh@10 -- # set +x 00:26:48.592 ************************************ 00:26:48.592 END TEST nvmf_fio_host 00:26:48.592 ************************************ 00:26:48.592 15:45:18 -- nvmf/nvmf.sh@98 -- # run_test nvmf_failover /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:48.592 15:45:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:26:48.592 15:45:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:26:48.592 15:45:18 -- common/autotest_common.sh@10 -- # set +x 00:26:48.592 ************************************ 00:26:48.592 START TEST nvmf_failover 00:26:48.592 ************************************ 00:26:48.592 15:45:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/failover.sh --transport=tcp 00:26:48.852 * Looking for test storage... 
00:26:48.852 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:26:48.852 15:45:18 -- host/failover.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:26:48.852 15:45:18 -- nvmf/common.sh@7 -- # uname -s 00:26:48.852 15:45:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:48.852 15:45:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:48.852 15:45:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:48.852 15:45:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:48.852 15:45:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:48.852 15:45:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:48.852 15:45:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:48.852 15:45:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:48.852 15:45:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:48.852 15:45:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:48.852 15:45:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:48.852 15:45:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:26:48.852 15:45:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:48.852 15:45:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:48.852 15:45:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:26:48.852 15:45:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:48.852 15:45:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:26:48.852 15:45:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:48.852 15:45:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:48.852 15:45:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:48.852 15:45:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.852 15:45:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.852 15:45:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.852 15:45:18 -- paths/export.sh@5 -- # export PATH 00:26:48.852 15:45:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:48.852 15:45:18 -- nvmf/common.sh@47 -- # : 0 00:26:48.852 15:45:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:48.852 15:45:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:48.852 15:45:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:48.852 15:45:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:48.852 15:45:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:48.852 15:45:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:48.852 15:45:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:48.852 15:45:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:48.852 15:45:18 -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:48.852 15:45:18 -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:48.852 15:45:18 -- host/failover.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:26:48.852 15:45:18 -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:48.852 15:45:18 -- host/failover.sh@18 -- # nvmftestinit 00:26:48.852 15:45:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:26:48.852 15:45:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:48.852 15:45:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:26:48.852 15:45:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:26:48.852 15:45:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:26:48.852 15:45:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.852 15:45:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:48.852 15:45:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:48.852 15:45:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:26:48.852 15:45:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:26:48.852 15:45:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:26:48.852 15:45:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:26:48.852 15:45:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:26:48.852 15:45:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:26:48.852 15:45:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:48.852 15:45:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:48.852 15:45:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:26:48.852 15:45:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:26:48.852 15:45:18 -- 
nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:26:48.852 15:45:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:26:48.852 15:45:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:26:48.852 15:45:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:48.852 15:45:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:26:48.852 15:45:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:26:48.852 15:45:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:26:48.852 15:45:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:26:48.852 15:45:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:26:48.852 15:45:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:26:48.852 Cannot find device "nvmf_tgt_br" 00:26:48.852 15:45:19 -- nvmf/common.sh@155 -- # true 00:26:48.852 15:45:19 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:26:48.852 Cannot find device "nvmf_tgt_br2" 00:26:48.852 15:45:19 -- nvmf/common.sh@156 -- # true 00:26:48.852 15:45:19 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:26:48.852 15:45:19 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:26:48.852 Cannot find device "nvmf_tgt_br" 00:26:48.852 15:45:19 -- nvmf/common.sh@158 -- # true 00:26:48.852 15:45:19 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:26:48.852 Cannot find device "nvmf_tgt_br2" 00:26:48.852 15:45:19 -- nvmf/common.sh@159 -- # true 00:26:48.852 15:45:19 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:26:48.852 15:45:19 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:26:48.852 15:45:19 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:26:48.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.852 15:45:19 -- nvmf/common.sh@162 -- # true 00:26:48.852 15:45:19 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:26:48.852 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:26:48.852 15:45:19 -- nvmf/common.sh@163 -- # true 00:26:48.852 15:45:19 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:26:48.853 15:45:19 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:26:48.853 15:45:19 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:26:49.112 15:45:19 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:26:49.112 15:45:19 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:26:49.112 15:45:19 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:26:49.112 15:45:19 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:26:49.112 15:45:19 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:26:49.112 15:45:19 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:26:49.112 15:45:19 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:26:49.112 15:45:19 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:26:49.112 15:45:19 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:26:49.112 15:45:19 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:26:49.112 15:45:19 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if 
up 00:26:49.112 15:45:19 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:26:49.112 15:45:19 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:26:49.112 15:45:19 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:26:49.112 15:45:19 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:26:49.112 15:45:19 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:26:49.112 15:45:19 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:26:49.112 15:45:19 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:26:49.112 15:45:19 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:26:49.112 15:45:19 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:26:49.112 15:45:19 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:26:49.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:49.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:26:49.112 00:26:49.112 --- 10.0.0.2 ping statistics --- 00:26:49.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.112 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:26:49.112 15:45:19 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:26:49.112 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:26:49.112 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.043 ms 00:26:49.112 00:26:49.112 --- 10.0.0.3 ping statistics --- 00:26:49.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.112 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:26:49.112 15:45:19 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:26:49.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:49.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.035 ms 00:26:49.112 00:26:49.112 --- 10.0.0.1 ping statistics --- 00:26:49.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:49.112 rtt min/avg/max/mdev = 0.035/0.035/0.035/0.000 ms 00:26:49.112 15:45:19 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:49.112 15:45:19 -- nvmf/common.sh@422 -- # return 0 00:26:49.112 15:45:19 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:26:49.112 15:45:19 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:49.112 15:45:19 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:26:49.112 15:45:19 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:26:49.112 15:45:19 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:49.112 15:45:19 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:26:49.112 15:45:19 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:26:49.112 15:45:19 -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:49.112 15:45:19 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:26:49.112 15:45:19 -- common/autotest_common.sh@710 -- # xtrace_disable 00:26:49.112 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:26:49.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:49.112 15:45:19 -- nvmf/common.sh@470 -- # nvmfpid=81395 00:26:49.112 15:45:19 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:49.112 15:45:19 -- nvmf/common.sh@471 -- # waitforlisten 81395 00:26:49.112 15:45:19 -- common/autotest_common.sh@817 -- # '[' -z 81395 ']' 00:26:49.112 15:45:19 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:49.112 15:45:19 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:49.112 15:45:19 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:49.112 15:45:19 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:49.112 15:45:19 -- common/autotest_common.sh@10 -- # set +x 00:26:49.112 [2024-04-26 15:45:19.372898] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:26:49.112 [2024-04-26 15:45:19.372970] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:49.371 [2024-04-26 15:45:19.510251] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:49.371 [2024-04-26 15:45:19.619857] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:49.371 [2024-04-26 15:45:19.620072] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:49.371 [2024-04-26 15:45:19.620444] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:49.371 [2024-04-26 15:45:19.620507] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:49.371 [2024-04-26 15:45:19.620710] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:49.371 [2024-04-26 15:45:19.620861] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:26:49.371 [2024-04-26 15:45:19.621562] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:26:49.371 [2024-04-26 15:45:19.621606] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:26:50.317 15:45:20 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:50.317 15:45:20 -- common/autotest_common.sh@850 -- # return 0 00:26:50.317 15:45:20 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:26:50.317 15:45:20 -- common/autotest_common.sh@716 -- # xtrace_disable 00:26:50.317 15:45:20 -- common/autotest_common.sh@10 -- # set +x 00:26:50.317 15:45:20 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:50.317 15:45:20 -- host/failover.sh@22 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:50.576 [2024-04-26 15:45:20.664757] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:50.576 15:45:20 -- host/failover.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:50.834 Malloc0 00:26:50.834 15:45:20 -- host/failover.sh@24 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:51.093 15:45:21 -- host/failover.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:51.351 15:45:21 -- host/failover.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:51.610 [2024-04-26 15:45:21.674235] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:51.610 15:45:21 -- host/failover.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:51.870 [2024-04-26 15:45:21.906339] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:51.870 15:45:21 -- host/failover.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:51.870 [2024-04-26 15:45:22.142535] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:52.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:52.131 15:45:22 -- host/failover.sh@31 -- # bdevperf_pid=81513 00:26:52.131 15:45:22 -- host/failover.sh@30 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:52.131 15:45:22 -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:52.131 15:45:22 -- host/failover.sh@34 -- # waitforlisten 81513 /var/tmp/bdevperf.sock 00:26:52.131 15:45:22 -- common/autotest_common.sh@817 -- # '[' -z 81513 ']' 00:26:52.131 15:45:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:52.131 15:45:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:26:52.131 15:45:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:26:52.131 15:45:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:26:52.131 15:45:22 -- common/autotest_common.sh@10 -- # set +x 00:26:53.067 15:45:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:26:53.067 15:45:23 -- common/autotest_common.sh@850 -- # return 0 00:26:53.067 15:45:23 -- host/failover.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.325 NVMe0n1 00:26:53.325 15:45:23 -- host/failover.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:53.583 00:26:53.583 15:45:23 -- host/failover.sh@39 -- # run_test_pid=81561 00:26:53.583 15:45:23 -- host/failover.sh@41 -- # sleep 1 00:26:53.583 15:45:23 -- host/failover.sh@38 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:54.960 15:45:24 -- host/failover.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:54.960 [2024-04-26 15:45:25.113881] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113940] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113953] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113962] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113971] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113981] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113990] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.113999] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114007] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114016] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114025] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114033] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114041] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114050] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 
15:45:25.114058] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114066] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114075] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114083] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114092] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114100] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114124] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114153] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114165] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114173] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114182] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114191] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114199] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114207] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114217] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114226] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114234] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114242] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114251] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114259] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114269] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114278] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same 
with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114286] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 [2024-04-26 15:45:25.114294] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1308e70 is same with the state(5) to be set 00:26:54.960 15:45:25 -- host/failover.sh@45 -- # sleep 3 00:26:58.313 15:45:28 -- host/failover.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:58.313 00:26:58.313 15:45:28 -- host/failover.sh@48 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:58.571 [2024-04-26 15:45:28.785673] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785721] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785750] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785759] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785768] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785777] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785786] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785794] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785803] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785812] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785820] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785829] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785837] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 [2024-04-26 15:45:28.785845] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1309680 is same with the state(5) to be set 00:26:58.571 15:45:28 -- host/failover.sh@50 -- # sleep 3 00:27:01.854 15:45:31 -- host/failover.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.854 [2024-04-26 15:45:32.069691] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:01.854 15:45:32 -- host/failover.sh@55 -- # sleep 1 
00:27:02.851 15:45:33 -- host/failover.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:03.109 15:45:33 -- host/failover.sh@59 -- # wait 81561 00:27:09.672 0 00:27:09.672 15:45:38 -- host/failover.sh@61 -- # killprocess 81513 00:27:09.672 15:45:38 -- common/autotest_common.sh@936 -- # '[' -z 81513 ']' 00:27:09.672 15:45:38 -- common/autotest_common.sh@940 -- # kill -0 81513 00:27:09.672 15:45:38 -- common/autotest_common.sh@941 -- # uname 00:27:09.672 15:45:38 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:09.672 15:45:38 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81513 00:27:09.672 killing process with pid 81513 00:27:09.672 15:45:39 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:09.672 15:45:39 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:09.672 15:45:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81513' 00:27:09.672 15:45:39 -- common/autotest_common.sh@955 -- # kill 81513 00:27:09.672 15:45:39 -- common/autotest_common.sh@960 -- # wait 81513 00:27:09.672 15:45:39 -- host/failover.sh@63 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:09.672 [2024-04-26 15:45:22.218458] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:27:09.672 [2024-04-26 15:45:22.218590] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81513 ] 00:27:09.672 [2024-04-26 15:45:22.358065] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:09.672 [2024-04-26 15:45:22.476142] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:09.672 Running I/O for 15 seconds... 
00:27:09.672 [2024-04-26 15:45:25.114896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:85400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.114941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.114969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:85408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.114987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:85416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:85424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:85432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:85440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:85448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:85456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:85464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:85472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115256] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:85480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:85488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:85496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:85504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:85512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:85520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:85528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:85536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:85544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:85552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115588] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:85560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:85568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:85576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:85584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:85592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:85600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:85608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:85616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:85624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:85632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.672 [2024-04-26 15:45:25.115879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.672 [2024-04-26 15:45:25.115895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:113 nsid:1 lba:85640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.115912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.115927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:85648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.115947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.115964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:85656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.115978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.115993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:85664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:85672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:85680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:85688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:85696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:85704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:86032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:86040 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:86048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:86056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:86064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:86072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:86080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:86088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:86096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:86104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:86112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:86120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:09.673 [2024-04-26 15:45:25.116565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:86128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:86136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:86144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:86152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:86160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:86168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:86176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:86184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:86192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:86200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116861] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:86208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:86216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:86224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.673 [2024-04-26 15:45:25.116964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.116980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:85712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.116995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.117010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:85720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.117025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.117040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:85728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.117055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.117070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:85736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.117084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.117099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:85744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.673 [2024-04-26 15:45:25.117114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.673 [2024-04-26 15:45:25.117129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:85752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.117154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:85760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.117186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:85768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.117215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:86232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:86240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:86248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:86256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:86264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:86272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:86280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:86288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:86296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:86304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:86312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:86320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:86328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:86336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:86344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:86352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:86360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:86368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:86376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 
15:45:25.117821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:86384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:86392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:86400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:86408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:86416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.674 [2024-04-26 15:45:25.117967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.117982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:85776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.117996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:85784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:85792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:85800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:85808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118148] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:85816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:85824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:85832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:85840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:85848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:85856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:85864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.674 [2024-04-26 15:45:25.118342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.674 [2024-04-26 15:45:25.118361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:85872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:85880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:85888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:105 nsid:1 lba:85896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:85904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:85912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:85920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:85928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:85936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:85944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:85952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:85960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:85968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:85976 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:85984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:85992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:86000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:86008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:86016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:25.118931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.118946] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff7550 is same with the state(5) to be set 00:27:09.675 [2024-04-26 15:45:25.118968] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.675 [2024-04-26 15:45:25.118979] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.675 [2024-04-26 15:45:25.118990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:86024 len:8 PRP1 0x0 PRP2 0x0 00:27:09.675 [2024-04-26 15:45:25.119005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.119064] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xff7550 was disconnected and freed. reset controller. 
00:27:09.675 [2024-04-26 15:45:25.119082] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:09.675 [2024-04-26 15:45:25.119150] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.675 [2024-04-26 15:45:25.119174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.119190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.675 [2024-04-26 15:45:25.119204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.119218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.675 [2024-04-26 15:45:25.119232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.119246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.675 [2024-04-26 15:45:25.119260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:25.119273] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.675 [2024-04-26 15:45:25.119308] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad740 (9): Bad file descriptor 00:27:09.675 [2024-04-26 15:45:25.123097] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.675 [2024-04-26 15:45:25.156366] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
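The dump up to this point covers the first failover cycle from bdevperf's side: queued I/O on the qpair is aborted with SQ DELETION, bdev_nvme starts a failover from 10.0.0.2:4420 to 10.0.0.2:4421, and the controller reset completes successfully. As a recap sketch only, the driving sequence is the one already captured in the host/failover.sh rpc.py trace earlier in this log; every command, path, address, port and NQN below is taken from that trace (nothing here is additional test output), and it assumes the nvmf target and the bdevperf instance from this run are still running with the same RPC sockets.

    # attach a second path to the subsystem through the temporary portal on port 4422 (bdevperf RPC socket)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # drop the listener the host is currently using, forcing the next failover (default target RPC socket)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    sleep 3
    # bring the original portal on port 4420 back, then retire the temporary 4422 portal
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    sleep 1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
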
00:27:09.675 [2024-04-26 15:45:28.785931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.785982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 15:45:28.786298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.675 [2024-04-26 
15:45:28.786328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.675 [2024-04-26 15:45:28.786347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.676 [2024-04-26 15:45:28.786377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.676 [2024-04-26 15:45:28.786406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.676 [2024-04-26 15:45:28.786436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.676 [2024-04-26 15:45:28.786475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:103056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:103064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:103072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:103088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:103096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:103104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:103112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:103120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:103128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:103144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:103152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:103160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:103168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:103176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.786976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:103184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.786991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:103192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:103200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:103208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:103216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:103224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:103232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:103240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:103248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:103256 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:103264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.676 [2024-04-26 15:45:28.787333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:103272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.676 [2024-04-26 15:45:28.787348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:103280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.677 [2024-04-26 15:45:28.787377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:103288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.677 [2024-04-26 15:45:28.787407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:103296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.677 [2024-04-26 15:45:28.787437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:09.677 [2024-04-26 15:45:28.787586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:102528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:102536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:102560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:102576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 
15:45:28.787895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:102584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:102592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.787977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:102600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.787991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:102608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:102632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:102640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788224] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:102664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:102688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:102696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:102704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:102720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:102736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788553] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.677 [2024-04-26 15:45:28.788568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:102744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.677 [2024-04-26 15:45:28.788583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:102752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:102760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:102776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:102792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:102808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:102840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.788971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.788986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:102864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:102880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:102904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:102936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:102952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:09.678 [2024-04-26 15:45:28.789503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:102984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:103304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:103312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:103320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:103328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:103336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:103344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:103352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:103360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.678 [2024-04-26 15:45:28.789772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.678 [2024-04-26 15:45:28.789787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.678 [2024-04-26 15:45:28.789807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 
15:45:28.789823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:103000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.679 [2024-04-26 15:45:28.789837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.789853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.679 [2024-04-26 15:45:28.789867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.789883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:103016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.679 [2024-04-26 15:45:28.789897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.789912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:103024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.679 [2024-04-26 15:45:28.789926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.789947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:103032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.679 [2024-04-26 15:45:28.789968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.789984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:103040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:09.679 [2024-04-26 15:45:28.790006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.790020] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfaddf0 is same with the state(5) to be set
00:27:09.679 [2024-04-26 15:45:28.790047] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:27:09.679 [2024-04-26 15:45:28.790059] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:27:09.679 [2024-04-26 15:45:28.790070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:103048 len:8 PRP1 0x0 PRP2 0x0
00:27:09.679 [2024-04-26 15:45:28.790083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.790151] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xfaddf0 was disconnected and freed. reset controller.
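The block above shows the teardown pattern for this stage of the failover test: every READ/WRITE still queued on the I/O qpair (sqid:1) toward 10.0.0.2:4421 is completed with ABORTED - SQ DELETION (00/08) once the submission queue is deleted, after which bdev_nvme frees the disconnected qpair (0xfaddf0) and requests a controller reset. When triaging a log like this it can help to summarize the abort storm rather than read it record by record; a minimal sketch, assuming the console output has been saved to a file named console.log (the file name is illustrative, not something this job produces):

  # count SQ-DELETION aborts per queue id
  grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' console.log | sort | uniq -c
  # split the aborted I/O commands on qid 1 into READs vs WRITEs
  grep -Eo '(READ|WRITE) sqid:1 cid:[0-9]+' console.log | awk '{print $1}' | sort | uniq -c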
00:27:09.679 [2024-04-26 15:45:28.790177] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:27:09.679 [2024-04-26 15:45:28.790231] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:27:09.679 [2024-04-26 15:45:28.790252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.790267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:27:09.679 [2024-04-26 15:45:28.790282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.790296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:27:09.679 [2024-04-26 15:45:28.790310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.790324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:27:09.679 [2024-04-26 15:45:28.790338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:09.679 [2024-04-26 15:45:28.790352] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:27:09.679 [2024-04-26 15:45:28.794164] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:27:09.679 [2024-04-26 15:45:28.794204] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad740 (9): Bad file descriptor
00:27:09.679 [2024-04-26 15:45:28.835789] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
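The records above capture the path switch itself: bdev_nvme_failover_trid moves the controller from the failed listener at 10.0.0.2:4421 to the alternate listener at 10.0.0.2:4422, the outstanding admin ASYNC EVENT REQUESTs are aborted along with the old admin SQ, the controller briefly reports the failed state on nqn.2016-06.io.spdk:cnode1, and the reset finishes with "Resetting controller successful". As a rough sketch of how a multipath setup like this is usually wired up with SPDK's rpc.py (these are not the exact commands run by this job; the controller name Nvme0 is an assumption and option spellings can differ between SPDK releases), the host registers two paths to the same subsystem and the target then drops one listener to force the failover seen above:

  # host side: attach the same subsystem through two TCP listeners under one controller name (assumed: Nvme0);
  # depending on the SPDK release, multipath mode may need to be selected explicitly (e.g. -x multipath)
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4421 -n nqn.2016-06.io.spdk:cnode1
  scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4422 -n nqn.2016-06.io.spdk:cnode1
  # target side: remove the first listener while I/O is in flight; queued commands are aborted with
  # SQ DELETION and the initiator fails over to 10.0.0.2:4422, which is what the log above records
  scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421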
00:27:09.679 [2024-04-26 15:45:33.346643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:55416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.346758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.346802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:55424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.346828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.346857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:55432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.346880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.346906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:55440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.346929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.346954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:55448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:55456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:55464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:55472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:55480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:55488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347321] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:55504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:55512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:55520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:55528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:55536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.679 [2024-04-26 15:45:33.347583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347829] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:55 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.679 [2024-04-26 15:45:33.347973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.679 [2024-04-26 15:45:33.347997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 
lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.348966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.348995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 
15:45:33.349581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.349961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.349989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.350019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.350046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.350076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.350103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.350149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.350179] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.350210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.350236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.680 [2024-04-26 15:45:33.350265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.680 [2024-04-26 15:45:33.350292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:09.681 [2024-04-26 15:45:33.350350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:55544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:55552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:55568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:55576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:55584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:55592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:55600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:55608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:55616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.350964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.350991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:55712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:55720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:55728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:55736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:55752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.351968] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:55760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.351995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.352025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:55768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.352061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.352092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:55776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.352119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.352169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:55784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.352198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.352227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:55792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.681 [2024-04-26 15:45:33.352254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.681 [2024-04-26 15:45:33.352316] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55800 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.352422] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.352442] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55808 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.352513] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.352539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55816 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.352619] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.352638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55824 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.352715] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.352734] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55832 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.352804] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.352831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55840 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.352927] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.352947] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.352967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55848 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.352992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353027] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353045] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55856 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353114] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55864 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353253] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 
[2024-04-26 15:45:33.353273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55872 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353323] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353349] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55880 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353420] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353439] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55888 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353508] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353527] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55896 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353597] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353616] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55904 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353700] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353719] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55912 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353790] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353828] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55920 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353879] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353897] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.353917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55928 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.353941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.353967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.353984] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.354004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55936 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.354030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.354064] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.354083] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.354103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55944 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.354128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.354181] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.354201] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.354220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55952 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.354245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.354271] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.354290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.354310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55960 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.354335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.354361] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.354390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.354412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:55968 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.354437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.354464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.682 [2024-04-26 15:45:33.354483] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.682 [2024-04-26 15:45:33.354502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55976 len:8 PRP1 0x0 PRP2 0x0 00:27:09.682 [2024-04-26 15:45:33.354527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.682 [2024-04-26 15:45:33.354553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.354578] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.354602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:55984 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.354627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.354654] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.354673] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.354692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55352 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.354717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.354743] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.354761] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.354781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55360 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.354807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.354832] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.354851] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.354871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55368 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.354896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.354921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.354940] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.354959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55376 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 
[2024-04-26 15:45:33.354984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355021] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.355040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.355059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55384 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.355084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355121] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.355156] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.355178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55392 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.355204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355229] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.355257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.355277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55400 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.355303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:09.683 [2024-04-26 15:45:33.355346] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:09.683 [2024-04-26 15:45:33.355365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55408 len:8 PRP1 0x0 PRP2 0x0 00:27:09.683 [2024-04-26 15:45:33.355391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355491] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1021c90 was disconnected and freed. reset controller. 
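The burst of "ABORTED - SQ DELETION" notices above is the expected side effect of a path being torn down while I/O is still queued: nvme_qpair completes every outstanding READ/WRITE on the deleted submission queue with an abort status, the disconnected qpair is freed, and bdev_nvme then resets the controller against the next configured transport ID. A condensed, hand-written sketch of the same failover flow, using only the rpc.py calls, addresses, and names that appear elsewhere in this trace (treat it as an illustration of the sequence rather than a copy of failover.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    # Expose the subsystem on a second TCP listener so a spare path exists.
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
    # Register both trids under the same bdev name; the extra trid becomes a failover path.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    # Dropping the path that currently carries I/O aborts its queued commands (the
    # SQ DELETION notices) and produces the "Start failover ..." and
    # "Resetting controller successful" messages seen in this log.
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
    $rpc -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0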
00:27:09.683 [2024-04-26 15:45:33.355524] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:27:09.683 [2024-04-26 15:45:33.355651] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.683 [2024-04-26 15:45:33.355688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355718] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.683 [2024-04-26 15:45:33.355745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.683 [2024-04-26 15:45:33.355800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:09.683 [2024-04-26 15:45:33.355854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:09.683 [2024-04-26 15:45:33.355880] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:09.683 [2024-04-26 15:45:33.355979] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfad740 (9): Bad file descriptor 00:27:09.683 [2024-04-26 15:45:33.362235] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:09.683 [2024-04-26 15:45:33.398546] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:09.683 00:27:09.683 Latency(us) 00:27:09.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.683 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:09.683 Verification LBA range: start 0x0 length 0x4000 00:27:09.683 NVMe0n1 : 15.01 9132.82 35.68 227.98 0.00 13643.84 614.40 17635.14 00:27:09.683 =================================================================================================================== 00:27:09.683 Total : 9132.82 35.68 227.98 0.00 13643.84 614.40 17635.14 00:27:09.683 Received shutdown signal, test time was about 15.000000 seconds 00:27:09.683 00:27:09.683 Latency(us) 00:27:09.683 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:09.683 =================================================================================================================== 00:27:09.683 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:09.683 15:45:39 -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:27:09.683 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:27:09.683 15:45:39 -- host/failover.sh@65 -- # count=3 00:27:09.683 15:45:39 -- host/failover.sh@67 -- # (( count != 3 )) 00:27:09.683 15:45:39 -- host/failover.sh@73 -- # bdevperf_pid=81770 00:27:09.683 15:45:39 -- host/failover.sh@72 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:27:09.683 15:45:39 -- host/failover.sh@75 -- # waitforlisten 81770 /var/tmp/bdevperf.sock 00:27:09.683 15:45:39 -- common/autotest_common.sh@817 -- # '[' -z 81770 ']' 00:27:09.683 15:45:39 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:09.683 15:45:39 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:09.683 15:45:39 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:09.683 15:45:39 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:09.683 15:45:39 -- common/autotest_common.sh@10 -- # set +x 00:27:10.251 15:45:40 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:10.251 15:45:40 -- common/autotest_common.sh@850 -- # return 0 00:27:10.251 15:45:40 -- host/failover.sh@76 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:10.509 [2024-04-26 15:45:40.651588] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:10.509 15:45:40 -- host/failover.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:27:10.767 [2024-04-26 15:45:40.883746] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:27:10.767 15:45:40 -- host/failover.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.024 NVMe0n1 00:27:11.024 15:45:41 -- host/failover.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.281 00:27:11.281 15:45:41 -- host/failover.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:11.539 00:27:11.539 15:45:41 -- host/failover.sh@82 -- # grep -q NVMe0 00:27:11.539 15:45:41 -- host/failover.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:11.797 15:45:42 -- host/failover.sh@84 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:12.055 15:45:42 -- host/failover.sh@87 -- # sleep 3 00:27:15.504 15:45:45 -- host/failover.sh@88 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:15.504 15:45:45 -- host/failover.sh@88 -- # grep -q NVMe0 00:27:15.504 15:45:45 -- host/failover.sh@90 -- # run_test_pid=81908 00:27:15.504 15:45:45 -- host/failover.sh@89 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:15.504 15:45:45 -- host/failover.sh@92 -- # wait 81908 00:27:16.440 0 00:27:16.440 15:45:46 -- host/failover.sh@94 -- # cat 
/home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:16.440 [2024-04-26 15:45:39.433883] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:27:16.440 [2024-04-26 15:45:39.434061] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81770 ] 00:27:16.440 [2024-04-26 15:45:39.578492] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:16.440 [2024-04-26 15:45:39.710548] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:16.440 [2024-04-26 15:45:42.259398] bdev_nvme.c:1856:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:27:16.440 [2024-04-26 15:45:42.259570] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.440 [2024-04-26 15:45:42.259594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.440 [2024-04-26 15:45:42.259614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.440 [2024-04-26 15:45:42.259628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.440 [2024-04-26 15:45:42.259657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.440 [2024-04-26 15:45:42.259670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.440 [2024-04-26 15:45:42.259685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:16.440 [2024-04-26 15:45:42.259698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:16.440 [2024-04-26 15:45:42.259713] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:16.440 [2024-04-26 15:45:42.259768] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1791740 (9): Bad file descriptor 00:27:16.440 [2024-04-26 15:45:42.259799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:16.440 [2024-04-26 15:45:42.262747] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:27:16.440 Running I/O for 1 seconds... 
00:27:16.440 00:27:16.440 Latency(us) 00:27:16.440 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:16.440 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:27:16.440 Verification LBA range: start 0x0 length 0x4000 00:27:16.440 NVMe0n1 : 1.01 9127.30 35.65 0.00 0.00 13956.42 2159.71 14596.65 00:27:16.440 =================================================================================================================== 00:27:16.440 Total : 9127.30 35.65 0.00 0.00 13956.42 2159.71 14596.65 00:27:16.440 15:45:46 -- host/failover.sh@95 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.440 15:45:46 -- host/failover.sh@95 -- # grep -q NVMe0 00:27:16.697 15:45:46 -- host/failover.sh@98 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:16.955 15:45:47 -- host/failover.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:16.955 15:45:47 -- host/failover.sh@99 -- # grep -q NVMe0 00:27:17.214 15:45:47 -- host/failover.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:17.473 15:45:47 -- host/failover.sh@101 -- # sleep 3 00:27:20.756 15:45:50 -- host/failover.sh@103 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:20.756 15:45:50 -- host/failover.sh@103 -- # grep -q NVMe0 00:27:20.756 15:45:50 -- host/failover.sh@108 -- # killprocess 81770 00:27:20.756 15:45:51 -- common/autotest_common.sh@936 -- # '[' -z 81770 ']' 00:27:20.756 15:45:51 -- common/autotest_common.sh@940 -- # kill -0 81770 00:27:20.756 15:45:51 -- common/autotest_common.sh@941 -- # uname 00:27:20.756 15:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:20.756 15:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81770 00:27:20.756 15:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:20.756 killing process with pid 81770 00:27:20.756 15:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:20.756 15:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81770' 00:27:20.756 15:45:51 -- common/autotest_common.sh@955 -- # kill 81770 00:27:20.756 15:45:51 -- common/autotest_common.sh@960 -- # wait 81770 00:27:21.321 15:45:51 -- host/failover.sh@110 -- # sync 00:27:21.321 15:45:51 -- host/failover.sh@111 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:21.578 15:45:51 -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:27:21.578 15:45:51 -- host/failover.sh@115 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:27:21.578 15:45:51 -- host/failover.sh@116 -- # nvmftestfini 00:27:21.578 15:45:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:21.578 15:45:51 -- nvmf/common.sh@117 -- # sync 00:27:21.578 15:45:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:21.578 15:45:51 -- nvmf/common.sh@120 -- # set +e 00:27:21.578 15:45:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:21.578 15:45:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:21.578 rmmod nvme_tcp 00:27:21.578 rmmod nvme_fabrics 00:27:21.578 rmmod nvme_keyring 00:27:21.579 15:45:51 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:27:21.579 15:45:51 -- nvmf/common.sh@124 -- # set -e 00:27:21.579 15:45:51 -- nvmf/common.sh@125 -- # return 0 00:27:21.579 15:45:51 -- nvmf/common.sh@478 -- # '[' -n 81395 ']' 00:27:21.579 15:45:51 -- nvmf/common.sh@479 -- # killprocess 81395 00:27:21.579 15:45:51 -- common/autotest_common.sh@936 -- # '[' -z 81395 ']' 00:27:21.579 15:45:51 -- common/autotest_common.sh@940 -- # kill -0 81395 00:27:21.579 15:45:51 -- common/autotest_common.sh@941 -- # uname 00:27:21.579 15:45:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:21.579 15:45:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 81395 00:27:21.579 15:45:51 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:21.579 killing process with pid 81395 00:27:21.579 15:45:51 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:21.579 15:45:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 81395' 00:27:21.579 15:45:51 -- common/autotest_common.sh@955 -- # kill 81395 00:27:21.579 15:45:51 -- common/autotest_common.sh@960 -- # wait 81395 00:27:22.158 15:45:52 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:22.158 15:45:52 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:22.158 15:45:52 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:22.158 15:45:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.158 15:45:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.158 15:45:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.158 15:45:52 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:22.158 00:27:22.158 real 0m33.324s 00:27:22.158 user 2m9.158s 00:27:22.158 sys 0m5.087s 00:27:22.158 15:45:52 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:22.158 15:45:52 -- common/autotest_common.sh@10 -- # set +x 00:27:22.158 ************************************ 00:27:22.158 END TEST nvmf_failover 00:27:22.158 ************************************ 00:27:22.158 15:45:52 -- nvmf/nvmf.sh@99 -- # run_test nvmf_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:22.158 15:45:52 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:22.158 15:45:52 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:22.158 15:45:52 -- common/autotest_common.sh@10 -- # set +x 00:27:22.158 ************************************ 00:27:22.158 START TEST nvmf_discovery 00:27:22.158 ************************************ 00:27:22.158 15:45:52 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:22.158 * Looking for test storage... 
00:27:22.158 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:22.158 15:45:52 -- host/discovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:22.158 15:45:52 -- nvmf/common.sh@7 -- # uname -s 00:27:22.158 15:45:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.158 15:45:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.158 15:45:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.158 15:45:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:22.158 15:45:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:22.158 15:45:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.158 15:45:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:22.158 15:45:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.158 15:45:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:22.158 15:45:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:22.158 15:45:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:22.158 15:45:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.158 15:45:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:22.158 15:45:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:22.158 15:45:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.158 15:45:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:22.158 15:45:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.158 15:45:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.158 15:45:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.158 15:45:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.158 15:45:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.158 15:45:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.158 15:45:52 -- paths/export.sh@5 -- # export PATH 00:27:22.158 15:45:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.158 15:45:52 -- nvmf/common.sh@47 -- # : 0 00:27:22.158 15:45:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:22.158 15:45:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:22.158 15:45:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:22.158 15:45:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.158 15:45:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.158 15:45:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:22.158 15:45:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:22.158 15:45:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:22.158 15:45:52 -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:27:22.158 15:45:52 -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:27:22.158 15:45:52 -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:22.158 15:45:52 -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:22.158 15:45:52 -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:22.158 15:45:52 -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:27:22.158 15:45:52 -- host/discovery.sh@25 -- # nvmftestinit 00:27:22.158 15:45:52 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:22.158 15:45:52 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.158 15:45:52 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:22.158 15:45:52 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:22.158 15:45:52 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:22.158 15:45:52 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:22.158 15:45:52 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:22.158 15:45:52 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:22.158 15:45:52 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:22.158 15:45:52 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:22.158 15:45:52 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:22.158 15:45:52 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:22.158 15:45:52 -- 
nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:22.158 15:45:52 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:22.158 15:45:52 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:22.158 15:45:52 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:22.158 15:45:52 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:22.158 15:45:52 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:22.158 15:45:52 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:22.158 15:45:52 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:22.158 15:45:52 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:22.158 15:45:52 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:22.158 15:45:52 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:22.415 15:45:52 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:22.415 Cannot find device "nvmf_tgt_br" 00:27:22.415 15:45:52 -- nvmf/common.sh@155 -- # true 00:27:22.415 15:45:52 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:22.415 Cannot find device "nvmf_tgt_br2" 00:27:22.415 15:45:52 -- nvmf/common.sh@156 -- # true 00:27:22.415 15:45:52 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:22.415 15:45:52 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:22.415 Cannot find device "nvmf_tgt_br" 00:27:22.415 15:45:52 -- nvmf/common.sh@158 -- # true 00:27:22.415 15:45:52 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:22.415 Cannot find device "nvmf_tgt_br2" 00:27:22.415 15:45:52 -- nvmf/common.sh@159 -- # true 00:27:22.415 15:45:52 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:22.415 15:45:52 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:22.415 15:45:52 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:22.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:22.415 15:45:52 -- nvmf/common.sh@162 -- # true 00:27:22.415 15:45:52 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:22.415 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:22.415 15:45:52 -- nvmf/common.sh@163 -- # true 00:27:22.415 15:45:52 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:22.415 15:45:52 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:22.415 15:45:52 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:22.415 15:45:52 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:22.415 15:45:52 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:22.415 15:45:52 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:22.415 15:45:52 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:22.415 15:45:52 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:22.415 15:45:52 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:22.415 15:45:52 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:22.415 15:45:52 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:22.415 15:45:52 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:22.415 15:45:52 -- 
nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:22.415 15:45:52 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:22.415 15:45:52 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:22.415 15:45:52 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:22.415 15:45:52 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:22.415 15:45:52 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:22.415 15:45:52 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:22.672 15:45:52 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:22.672 15:45:52 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:22.672 15:45:52 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:22.672 15:45:52 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:22.672 15:45:52 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:22.672 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:22.672 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:27:22.672 00:27:22.672 --- 10.0.0.2 ping statistics --- 00:27:22.672 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.673 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:27:22.673 15:45:52 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:22.673 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:22.673 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:27:22.673 00:27:22.673 --- 10.0.0.3 ping statistics --- 00:27:22.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.673 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:27:22.673 15:45:52 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:22.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:22.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:27:22.673 00:27:22.673 --- 10.0.0.1 ping statistics --- 00:27:22.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:22.673 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:27:22.673 15:45:52 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:22.673 15:45:52 -- nvmf/common.sh@422 -- # return 0 00:27:22.673 15:45:52 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:22.673 15:45:52 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:22.673 15:45:52 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:22.673 15:45:52 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:22.673 15:45:52 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:22.673 15:45:52 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:22.673 15:45:52 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:22.673 15:45:52 -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:27:22.673 15:45:52 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:22.673 15:45:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:22.673 15:45:52 -- common/autotest_common.sh@10 -- # set +x 00:27:22.673 15:45:52 -- nvmf/common.sh@470 -- # nvmfpid=82216 00:27:22.673 15:45:52 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:22.673 15:45:52 -- nvmf/common.sh@471 -- # waitforlisten 82216 00:27:22.673 15:45:52 -- common/autotest_common.sh@817 -- # '[' -z 82216 ']' 00:27:22.673 15:45:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:22.673 15:45:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:22.673 15:45:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:22.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:22.673 15:45:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:22.673 15:45:52 -- common/autotest_common.sh@10 -- # set +x 00:27:22.673 [2024-04-26 15:45:52.858977] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:27:22.673 [2024-04-26 15:45:52.859077] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:22.931 [2024-04-26 15:45:52.997635] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:22.931 [2024-04-26 15:45:53.152642] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:22.931 [2024-04-26 15:45:53.152718] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:22.931 [2024-04-26 15:45:53.152730] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:22.931 [2024-04-26 15:45:53.152738] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:22.931 [2024-04-26 15:45:53.152746] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
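[Editor's sketch] The nvmf_veth_init steps traced above build the test topology: one initiator veth pair kept on the host, two target veth pairs moved into the nvmf_tgt_ns_spdk namespace, the host-side peer ends enslaved to the nvmf_br bridge, and TCP port 4420 opened on the initiator interface. A condensed reproduction of the same setup, assuming root privileges and iproute2/iptables, with the interface and namespace names exactly as nvmf/common.sh uses them:

    ip netns add nvmf_tgt_ns_spdk
    ip link add nvmf_init_if type veth peer name nvmf_init_br
    ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
    ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
    ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
    ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
    ip addr add 10.0.0.1/24 dev nvmf_init_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
    ip link set nvmf_init_if up; ip link set nvmf_init_br up
    ip link set nvmf_tgt_br up;  ip link set nvmf_tgt_br2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
    ip netns exec nvmf_tgt_ns_spdk ip link set lo up
    ip link add nvmf_br type bridge && ip link set nvmf_br up
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br  master nvmf_br
    ip link set nvmf_tgt_br2 master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT

After this, 10.0.0.2 and 10.0.0.3 are reachable from the host and 10.0.0.1 from inside the namespace (which the three pings above verify), and the target is launched inside the namespace via "ip netns exec nvmf_tgt_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF -m 0x2" as shown in the trace.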
00:27:22.931 [2024-04-26 15:45:53.152799] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.864 15:45:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:23.864 15:45:53 -- common/autotest_common.sh@850 -- # return 0 00:27:23.864 15:45:53 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:23.864 15:45:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:23.864 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 15:45:53 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.864 15:45:53 -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.864 15:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.864 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 [2024-04-26 15:45:53.924809] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.864 15:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.864 15:45:53 -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:23.864 15:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.864 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.864 [2024-04-26 15:45:53.932952] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:23.864 15:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.864 15:45:53 -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:23.864 15:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.864 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 null0 00:27:23.865 15:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.865 15:45:53 -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:23.865 15:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.865 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 null1 00:27:23.865 15:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.865 15:45:53 -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:27:23.865 15:45:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:23.865 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 15:45:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:23.865 15:45:53 -- host/discovery.sh@45 -- # hostpid=82266 00:27:23.865 15:45:53 -- host/discovery.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:23.865 15:45:53 -- host/discovery.sh@46 -- # waitforlisten 82266 /tmp/host.sock 00:27:23.865 15:45:53 -- common/autotest_common.sh@817 -- # '[' -z 82266 ']' 00:27:23.865 15:45:53 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:23.865 15:45:53 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:23.865 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:23.865 15:45:53 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:23.865 15:45:53 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:23.865 15:45:53 -- common/autotest_common.sh@10 -- # set +x 00:27:23.865 [2024-04-26 15:45:54.012642] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:27:23.865 [2024-04-26 15:45:54.012749] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82266 ] 00:27:23.865 [2024-04-26 15:45:54.150278] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:24.123 [2024-04-26 15:45:54.269415] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:24.690 15:45:54 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:24.690 15:45:54 -- common/autotest_common.sh@850 -- # return 0 00:27:24.690 15:45:54 -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:24.690 15:45:54 -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:24.690 15:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.690 15:45:54 -- common/autotest_common.sh@10 -- # set +x 00:27:24.690 15:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.690 15:45:54 -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:24.690 15:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.690 15:45:54 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:54 -- host/discovery.sh@72 -- # notify_id=0 00:27:24.949 15:45:54 -- host/discovery.sh@83 -- # get_subsystem_names 00:27:24.949 15:45:54 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.949 15:45:54 -- host/discovery.sh@59 -- # sort 00:27:24.949 15:45:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:54 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:54 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.949 15:45:54 -- host/discovery.sh@59 -- # xargs 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:55 -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:24.949 15:45:55 -- host/discovery.sh@84 -- # get_bdev_list 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.949 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # sort 00:27:24.949 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # xargs 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:55 -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:27:24.949 15:45:55 -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:24.949 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:55 -- host/discovery.sh@87 -- # get_subsystem_names 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.949 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.949 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:55 -- host/discovery.sh@59 
-- # sort 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # xargs 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:55 -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:24.949 15:45:55 -- host/discovery.sh@88 -- # get_bdev_list 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.949 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # sort 00:27:24.949 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:55 -- host/discovery.sh@55 -- # xargs 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:55 -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:27:24.949 15:45:55 -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:24.949 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:24.949 15:45:55 -- host/discovery.sh@91 -- # get_subsystem_names 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.949 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:24.949 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # sort 00:27:24.949 15:45:55 -- host/discovery.sh@59 -- # xargs 00:27:24.949 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.207 15:45:55 -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:27:25.207 15:45:55 -- host/discovery.sh@92 -- # get_bdev_list 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.207 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.207 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # sort 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # xargs 00:27:25.207 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.207 15:45:55 -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:25.207 15:45:55 -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:25.207 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.207 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.207 [2024-04-26 15:45:55.325415] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:25.207 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.207 15:45:55 -- host/discovery.sh@97 -- # get_subsystem_names 00:27:25.207 15:45:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.207 15:45:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.207 15:45:55 -- host/discovery.sh@59 -- # xargs 00:27:25.207 15:45:55 -- host/discovery.sh@59 -- # sort 00:27:25.207 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.207 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.207 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.207 15:45:55 -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:27:25.207 15:45:55 
-- host/discovery.sh@98 -- # get_bdev_list 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # sort 00:27:25.207 15:45:55 -- host/discovery.sh@55 -- # xargs 00:27:25.207 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.208 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.208 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.208 15:45:55 -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:27:25.208 15:45:55 -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:27:25.208 15:45:55 -- host/discovery.sh@79 -- # expected_count=0 00:27:25.208 15:45:55 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.208 15:45:55 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.208 15:45:55 -- common/autotest_common.sh@901 -- # local max=10 00:27:25.208 15:45:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:25.208 15:45:55 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.208 15:45:55 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:25.208 15:45:55 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:25.208 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.208 15:45:55 -- host/discovery.sh@74 -- # jq '. | length' 00:27:25.208 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.208 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.465 15:45:55 -- host/discovery.sh@74 -- # notification_count=0 00:27:25.465 15:45:55 -- host/discovery.sh@75 -- # notify_id=0 00:27:25.465 15:45:55 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:25.465 15:45:55 -- common/autotest_common.sh@904 -- # return 0 00:27:25.465 15:45:55 -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:25.465 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.465 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.465 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.465 15:45:55 -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.465 15:45:55 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.465 15:45:55 -- common/autotest_common.sh@901 -- # local max=10 00:27:25.465 15:45:55 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:25.465 15:45:55 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:25.465 15:45:55 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:25.465 15:45:55 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.465 15:45:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:25.465 15:45:55 -- host/discovery.sh@59 -- # sort 00:27:25.465 15:45:55 -- common/autotest_common.sh@10 -- # set +x 00:27:25.465 15:45:55 -- host/discovery.sh@59 -- # xargs 00:27:25.465 15:45:55 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:25.465 15:45:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:25.465 15:45:55 -- common/autotest_common.sh@903 -- 
# [[ '' == \n\v\m\e\0 ]] 00:27:25.465 15:45:55 -- common/autotest_common.sh@906 -- # sleep 1 00:27:25.724 [2024-04-26 15:45:55.998456] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:25.724 [2024-04-26 15:45:55.998492] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:25.724 [2024-04-26 15:45:55.998524] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:25.983 [2024-04-26 15:45:56.084628] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:25.983 [2024-04-26 15:45:56.140727] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:25.983 [2024-04-26 15:45:56.140771] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:26.304 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.304 15:45:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:26.304 15:45:56 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:26.562 15:45:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.562 15:45:56 -- host/discovery.sh@59 -- # sort 00:27:26.562 15:45:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.562 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.562 15:45:56 -- host/discovery.sh@59 -- # xargs 00:27:26.562 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.562 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.562 15:45:56 -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.562 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.562 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.562 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # sort 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # xargs 00:27:26.562 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:26.562 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.562 15:45:56 -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.562 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.562 15:45:56 
-- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:26.562 15:45:56 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.562 15:45:56 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.562 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.562 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 15:45:56 -- host/discovery.sh@63 -- # sort -n 00:27:26.562 15:45:56 -- host/discovery.sh@63 -- # xargs 00:27:26.562 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0 ]] 00:27:26.562 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.562 15:45:56 -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:27:26.562 15:45:56 -- host/discovery.sh@79 -- # expected_count=1 00:27:26.562 15:45:56 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.562 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.562 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.562 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:26.562 15:45:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:26.562 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.562 15:45:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:26.562 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.562 15:45:56 -- host/discovery.sh@74 -- # notification_count=1 00:27:26.562 15:45:56 -- host/discovery.sh@75 -- # notify_id=1 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:26.562 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.562 15:45:56 -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:26.562 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.562 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.562 15:45:56 -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.562 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:26.562 15:45:56 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.562 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.562 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # sort 00:27:26.562 15:45:56 -- host/discovery.sh@55 -- # xargs 00:27:26.562 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.820 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.820 15:45:56 -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:27:26.820 15:45:56 -- host/discovery.sh@79 -- # expected_count=1 00:27:26.820 15:45:56 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.820 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.820 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.820 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:26.820 15:45:56 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:26.820 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.820 15:45:56 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:26.820 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.820 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.820 15:45:56 -- host/discovery.sh@74 -- # notification_count=1 00:27:26.820 15:45:56 -- host/discovery.sh@75 -- # notify_id=2 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:26.820 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.820 15:45:56 -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:26.820 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.820 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.820 [2024-04-26 15:45:56.931613] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:26.820 [2024-04-26 15:45:56.932162] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:26.820 [2024-04-26 15:45:56.932232] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.820 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.820 15:45:56 -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.820 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.820 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.820 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:26.820 15:45:56 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.820 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.820 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.820 15:45:56 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:26.820 15:45:56 -- host/discovery.sh@59 -- # sort 00:27:26.820 15:45:56 -- host/discovery.sh@59 -- # xargs 00:27:26.820 15:45:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.820 15:45:56 -- common/autotest_common.sh@904 -- # return 0 00:27:26.820 15:45:56 -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.820 15:45:56 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.820 15:45:56 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.820 15:45:56 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:26.820 15:45:56 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:26.820 15:45:56 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.820 15:45:56 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:26.821 15:45:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.821 15:45:56 -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 15:45:56 -- host/discovery.sh@55 -- # xargs 00:27:26.821 15:45:56 -- host/discovery.sh@55 -- # sort 00:27:26.821 [2024-04-26 15:45:57.020229] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM 
nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:26.821 15:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.821 15:45:57 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.821 15:45:57 -- common/autotest_common.sh@904 -- # return 0 00:27:26.821 15:45:57 -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:26.821 15:45:57 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:26.821 15:45:57 -- common/autotest_common.sh@901 -- # local max=10 00:27:26.821 15:45:57 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:26.821 15:45:57 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:26.821 15:45:57 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:26.821 15:45:57 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.821 15:45:57 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.821 15:45:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:26.821 15:45:57 -- common/autotest_common.sh@10 -- # set +x 00:27:26.821 15:45:57 -- host/discovery.sh@63 -- # xargs 00:27:26.821 15:45:57 -- host/discovery.sh@63 -- # sort -n 00:27:26.821 15:45:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:26.821 [2024-04-26 15:45:57.080641] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:26.821 [2024-04-26 15:45:57.080698] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:26.821 [2024-04-26 15:45:57.080717] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:26.821 15:45:57 -- common/autotest_common.sh@903 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:26.821 15:45:57 -- common/autotest_common.sh@906 -- # sleep 1 00:27:28.194 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.194 15:45:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:28.194 15:45:58 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:28.194 15:45:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:28.194 15:45:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:28.194 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.194 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- host/discovery.sh@63 -- # sort -n 00:27:28.195 15:45:58 -- host/discovery.sh@63 -- # xargs 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.195 15:45:58 -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:27:28.195 15:45:58 -- host/discovery.sh@79 -- # expected_count=0 00:27:28.195 15:45:58 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:28.195 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && 
((notification_count == expected_count))' 00:27:28.195 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.195 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:28.195 15:45:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- host/discovery.sh@74 -- # jq '. | length' 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- host/discovery.sh@74 -- # notification_count=0 00:27:28.195 15:45:58 -- host/discovery.sh@75 -- # notify_id=2 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.195 15:45:58 -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 [2024-04-26 15:45:58.228748] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:28.195 [2024-04-26 15:45:58.228807] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.195 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # xargs 00:27:28.195 [2024-04-26 15:45:58.238458] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.195 [2024-04-26 15:45:58.238495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.195 [2024-04-26 15:45:58.238510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.195 [2024-04-26 15:45:58.238519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.195 [2024-04-26 15:45:58.238530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.195 
[2024-04-26 15:45:58.238539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.195 [2024-04-26 15:45:58.238549] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:28.195 [2024-04-26 15:45:58.238558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:28.195 [2024-04-26 15:45:58.238567] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # sort 00:27:28.195 [2024-04-26 15:45:58.248389] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 [2024-04-26 15:45:58.258410] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.195 [2024-04-26 15:45:58.258560] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.258613] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.258630] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7a10 with addr=10.0.0.2, port=4420 00:27:28.195 [2024-04-26 15:45:58.258642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 [2024-04-26 15:45:58.258663] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 [2024-04-26 15:45:58.258688] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.195 [2024-04-26 15:45:58.258698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.195 [2024-04-26 15:45:58.258710] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.195 [2024-04-26 15:45:58.258726] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.195 [2024-04-26 15:45:58.268484] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.195 [2024-04-26 15:45:58.268568] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.268615] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.268631] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7a10 with addr=10.0.0.2, port=4420 00:27:28.195 [2024-04-26 15:45:58.268642] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 [2024-04-26 15:45:58.268658] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 [2024-04-26 15:45:58.268673] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.195 [2024-04-26 15:45:58.268697] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.195 [2024-04-26 15:45:58.268715] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.195 [2024-04-26 15:45:58.268730] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.195 [2024-04-26 15:45:58.278537] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.195 [2024-04-26 15:45:58.278631] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.278678] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.278706] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7a10 with addr=10.0.0.2, port=4420 00:27:28.195 [2024-04-26 15:45:58.278717] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 [2024-04-26 15:45:58.278734] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 [2024-04-26 15:45:58.278748] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.195 [2024-04-26 15:45:58.278757] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.195 [2024-04-26 15:45:58.278767] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.195 [2024-04-26 15:45:58.278788] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
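[Editor's sketch] The get_subsystem_names/get_bdev_list checks interleaved with the reset errors here are driven by the autotest waitforcondition polling helper; reconstructed from the @900-@906 xtrace tags visible in this log, it is roughly the loop below (a sketch only; the failure path is not visible in the trace):

    waitforcondition() {
        local cond=$1      # e.g. '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
        local max=10
        while (( max-- )); do
            # re-evaluate the condition string; success ends the wait
            if eval "$cond"; then
                return 0
            fi
            sleep 1
        done
        return 1
    }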
00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.195 15:45:58 -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.195 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:28.195 [2024-04-26 15:45:58.288594] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.195 [2024-04-26 15:45:58.288671] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.288721] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.288737] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7a10 with addr=10.0.0.2, port=4420 00:27:28.195 [2024-04-26 15:45:58.288748] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 [2024-04-26 15:45:58.288764] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 [2024-04-26 15:45:58.288787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.195 [2024-04-26 15:45:58.288796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.195 [2024-04-26 15:45:58.288805] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.195 [2024-04-26 15:45:58.288820] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:28.195 15:45:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.195 15:45:58 -- host/discovery.sh@55 -- # sort 00:27:28.195 15:45:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.195 15:45:58 -- host/discovery.sh@55 -- # xargs 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 [2024-04-26 15:45:58.298642] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.195 [2024-04-26 15:45:58.298722] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.298766] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.298782] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7a10 with addr=10.0.0.2, port=4420 00:27:28.195 [2024-04-26 15:45:58.298792] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 [2024-04-26 15:45:58.298808] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 [2024-04-26 15:45:58.298823] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.195 [2024-04-26 15:45:58.298832] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.195 [2024-04-26 15:45:58.298841] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.195 [2024-04-26 15:45:58.298855] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:28.195 [2024-04-26 15:45:58.308694] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:28.195 [2024-04-26 15:45:58.308787] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.308837] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:28.195 [2024-04-26 15:45:58.308854] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7a7a10 with addr=10.0.0.2, port=4420 00:27:28.195 [2024-04-26 15:45:58.308865] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7a7a10 is same with the state(5) to be set 00:27:28.195 [2024-04-26 15:45:58.308882] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7a7a10 (9): Bad file descriptor 00:27:28.195 [2024-04-26 15:45:58.308896] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:28.195 [2024-04-26 15:45:58.308906] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:27:28.195 [2024-04-26 15:45:58.308915] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:28.195 [2024-04-26 15:45:58.308930] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
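[Editor's note] The repeated connect() errno 111 (ECONNREFUSED) retries above are expected: host/discovery.sh@127 removed the 10.0.0.2:4420 listener, so reconnects to that port are refused until the discovery poller drops the stale path, which the "4420 not found" / "4421 found again" messages below confirm. The remaining path can be checked by hand with the same RPC the test traces, assuming the test environment's rpc_cmd wrapper (equivalently scripts/rpc.py -s /tmp/host.sock ...):

    # list the transport service IDs of the surviving paths for controller nvme0
    rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 \
        | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    # expected once the stale 4420 path is gone: 4421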
00:27:28.195 [2024-04-26 15:45:58.314614] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:28.195 [2024-04-26 15:45:58.314656] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.195 15:45:58 -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.195 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # get_subsystem_paths nvme0 00:27:28.195 15:45:58 -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:28.195 15:45:58 -- host/discovery.sh@63 -- # sort -n 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- host/discovery.sh@63 -- # xargs 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # [[ 4421 == \4\4\2\1 ]] 00:27:28.195 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.195 15:45:58 -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:27:28.195 15:45:58 -- host/discovery.sh@79 -- # expected_count=0 00:27:28.195 15:45:58 -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:28.195 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:28.195 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.195 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:28.195 15:45:58 -- host/discovery.sh@74 -- # jq '. 
| length' 00:27:28.195 15:45:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- host/discovery.sh@74 -- # notification_count=0 00:27:28.195 15:45:58 -- host/discovery.sh@75 -- # notify_id=2 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.195 15:45:58 -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.195 15:45:58 -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.195 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:28.195 15:45:58 -- common/autotest_common.sh@903 -- # get_subsystem_names 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:28.195 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.195 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # jq -r '.[].name' 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # sort 00:27:28.195 15:45:58 -- host/discovery.sh@59 -- # xargs 00:27:28.195 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.454 15:45:58 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:27:28.454 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.454 15:45:58 -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:28.454 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:28.454 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.454 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.454 15:45:58 -- common/autotest_common.sh@903 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:28.454 15:45:58 -- common/autotest_common.sh@903 -- # get_bdev_list 00:27:28.454 15:45:58 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:28.454 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.454 15:45:58 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:28.454 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.455 15:45:58 -- host/discovery.sh@55 -- # sort 00:27:28.455 15:45:58 -- host/discovery.sh@55 -- # xargs 00:27:28.455 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.455 15:45:58 -- common/autotest_common.sh@903 -- # [[ '' == '' ]] 00:27:28.455 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.455 15:45:58 -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:27:28.455 15:45:58 -- host/discovery.sh@79 -- # expected_count=2 00:27:28.455 15:45:58 -- host/discovery.sh@80 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:28.455 15:45:58 -- common/autotest_common.sh@900 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:28.455 15:45:58 -- common/autotest_common.sh@901 -- # local max=10 00:27:28.455 15:45:58 -- common/autotest_common.sh@902 -- # (( max-- )) 00:27:28.455 15:45:58 -- common/autotest_common.sh@903 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:28.455 15:45:58 -- common/autotest_common.sh@903 -- # get_notification_count 00:27:28.455 15:45:58 -- host/discovery.sh@74 -- # jq '. | length' 00:27:28.455 15:45:58 -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:28.455 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.455 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:28.455 15:45:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:28.455 15:45:58 -- host/discovery.sh@74 -- # notification_count=2 00:27:28.455 15:45:58 -- host/discovery.sh@75 -- # notify_id=4 00:27:28.455 15:45:58 -- common/autotest_common.sh@903 -- # (( notification_count == expected_count )) 00:27:28.455 15:45:58 -- common/autotest_common.sh@904 -- # return 0 00:27:28.455 15:45:58 -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:28.455 15:45:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:28.455 15:45:58 -- common/autotest_common.sh@10 -- # set +x 00:27:29.404 [2024-04-26 15:45:59.669288] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:29.404 [2024-04-26 15:45:59.669349] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:29.404 [2024-04-26 15:45:59.669372] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:29.663 [2024-04-26 15:45:59.756481] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:29.663 [2024-04-26 15:45:59.823307] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:29.663 [2024-04-26 15:45:59.823380] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.663 15:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.663 15:45:59 -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.663 15:45:59 -- common/autotest_common.sh@638 -- # local es=0 00:27:29.663 15:45:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.663 15:45:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:29.663 15:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:29.663 15:45:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:29.663 15:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:29.663 15:45:59 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 
-w 00:27:29.663 15:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.663 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.663 2024/04/26 15:45:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:29.663 request: 00:27:29.663 { 00:27:29.663 "method": "bdev_nvme_start_discovery", 00:27:29.663 "params": { 00:27:29.663 "name": "nvme", 00:27:29.663 "trtype": "tcp", 00:27:29.663 "traddr": "10.0.0.2", 00:27:29.663 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:29.663 "adrfam": "ipv4", 00:27:29.663 "trsvcid": "8009", 00:27:29.663 "wait_for_attach": true 00:27:29.663 } 00:27:29.663 } 00:27:29.663 Got JSON-RPC error response 00:27:29.663 GoRPCClient: error on JSON-RPC call 00:27:29.663 15:45:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:29.663 15:45:59 -- common/autotest_common.sh@641 -- # es=1 00:27:29.663 15:45:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:29.663 15:45:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:29.663 15:45:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:29.663 15:45:59 -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:27:29.663 15:45:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:29.663 15:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.663 15:45:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:29.663 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.663 15:45:59 -- host/discovery.sh@67 -- # sort 00:27:29.663 15:45:59 -- host/discovery.sh@67 -- # xargs 00:27:29.663 15:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.663 15:45:59 -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:27:29.663 15:45:59 -- host/discovery.sh@146 -- # get_bdev_list 00:27:29.663 15:45:59 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.663 15:45:59 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.663 15:45:59 -- host/discovery.sh@55 -- # sort 00:27:29.663 15:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.663 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.663 15:45:59 -- host/discovery.sh@55 -- # xargs 00:27:29.663 15:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.663 15:45:59 -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.663 15:45:59 -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.663 15:45:59 -- common/autotest_common.sh@638 -- # local es=0 00:27:29.663 15:45:59 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.663 15:45:59 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:29.663 15:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:29.663 15:45:59 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:29.663 15:45:59 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:29.663 15:45:59 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.663 15:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.663 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.923 2024/04/26 15:45:59 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8009 trtype:tcp wait_for_attach:%!s(bool=true)], err: error received for bdev_nvme_start_discovery method, err: Code=-17 Msg=File exists 00:27:29.923 request: 00:27:29.923 { 00:27:29.923 "method": "bdev_nvme_start_discovery", 00:27:29.923 "params": { 00:27:29.923 "name": "nvme_second", 00:27:29.923 "trtype": "tcp", 00:27:29.923 "traddr": "10.0.0.2", 00:27:29.923 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:29.923 "adrfam": "ipv4", 00:27:29.923 "trsvcid": "8009", 00:27:29.923 "wait_for_attach": true 00:27:29.923 } 00:27:29.923 } 00:27:29.923 Got JSON-RPC error response 00:27:29.923 GoRPCClient: error on JSON-RPC call 00:27:29.923 15:45:59 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:29.923 15:45:59 -- common/autotest_common.sh@641 -- # es=1 00:27:29.923 15:45:59 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:29.923 15:45:59 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:29.923 15:45:59 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:29.923 15:45:59 -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:27:29.923 15:45:59 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:29.923 15:45:59 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:29.923 15:45:59 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.923 15:45:59 -- host/discovery.sh@67 -- # xargs 00:27:29.923 15:45:59 -- common/autotest_common.sh@10 -- # set +x 00:27:29.923 15:45:59 -- host/discovery.sh@67 -- # sort 00:27:29.923 15:45:59 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.923 15:46:00 -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:27:29.923 15:46:00 -- host/discovery.sh@152 -- # get_bdev_list 00:27:29.923 15:46:00 -- host/discovery.sh@55 -- # jq -r '.[].name' 00:27:29.923 15:46:00 -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.923 15:46:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.923 15:46:00 -- common/autotest_common.sh@10 -- # set +x 00:27:29.923 15:46:00 -- host/discovery.sh@55 -- # sort 00:27:29.923 15:46:00 -- host/discovery.sh@55 -- # xargs 00:27:29.923 15:46:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:29.923 15:46:00 -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.923 15:46:00 -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:29.923 15:46:00 -- common/autotest_common.sh@638 -- # local es=0 00:27:29.923 15:46:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:29.923 15:46:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:27:29.923 15:46:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:29.923 15:46:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:27:29.923 15:46:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:27:29.923 15:46:00 -- common/autotest_common.sh@641 -- # 
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:29.923 15:46:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:29.923 15:46:00 -- common/autotest_common.sh@10 -- # set +x 00:27:30.857 [2024-04-26 15:46:01.092068] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.857 [2024-04-26 15:46:01.092219] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.857 [2024-04-26 15:46:01.092250] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8038d0 with addr=10.0.0.2, port=8010 00:27:30.857 [2024-04-26 15:46:01.092282] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:30.857 [2024-04-26 15:46:01.092302] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:30.857 [2024-04-26 15:46:01.092318] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:32.228 [2024-04-26 15:46:02.092040] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.228 [2024-04-26 15:46:02.092194] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:32.228 [2024-04-26 15:46:02.092225] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x83f520 with addr=10.0.0.2, port=8010 00:27:32.228 [2024-04-26 15:46:02.092258] nvme_tcp.c:2699:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:32.228 [2024-04-26 15:46:02.092275] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:32.228 [2024-04-26 15:46:02.092292] bdev_nvme.c:6981:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:33.163 [2024-04-26 15:46:03.091876] bdev_nvme.c:6962:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:33.163 2024/04/26 15:46:03 error on JSON-RPC call, method: bdev_nvme_start_discovery, params: map[adrfam:ipv4 attach_timeout_ms:3000 hostnqn:nqn.2021-12.io.spdk:test name:nvme_second traddr:10.0.0.2 trsvcid:8010 trtype:tcp], err: error received for bdev_nvme_start_discovery method, err: Code=-110 Msg=Connection timed out 00:27:33.163 request: 00:27:33.163 { 00:27:33.163 "method": "bdev_nvme_start_discovery", 00:27:33.163 "params": { 00:27:33.163 "name": "nvme_second", 00:27:33.163 "trtype": "tcp", 00:27:33.163 "traddr": "10.0.0.2", 00:27:33.163 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:33.163 "adrfam": "ipv4", 00:27:33.163 "trsvcid": "8010", 00:27:33.163 "attach_timeout_ms": 3000 00:27:33.163 } 00:27:33.163 } 00:27:33.163 Got JSON-RPC error response 00:27:33.163 GoRPCClient: error on JSON-RPC call 00:27:33.163 15:46:03 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:27:33.163 15:46:03 -- common/autotest_common.sh@641 -- # es=1 00:27:33.163 15:46:03 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:27:33.163 15:46:03 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:27:33.163 15:46:03 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:27:33.163 15:46:03 -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:27:33.163 15:46:03 -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:33.163 15:46:03 -- host/discovery.sh@67 -- # jq -r '.[].name' 00:27:33.163 15:46:03 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:33.163 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:27:33.163 15:46:03 
-- host/discovery.sh@67 -- # sort 00:27:33.163 15:46:03 -- host/discovery.sh@67 -- # xargs 00:27:33.163 15:46:03 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:33.163 15:46:03 -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:27:33.163 15:46:03 -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:27:33.163 15:46:03 -- host/discovery.sh@161 -- # kill 82266 00:27:33.163 15:46:03 -- host/discovery.sh@162 -- # nvmftestfini 00:27:33.163 15:46:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:33.163 15:46:03 -- nvmf/common.sh@117 -- # sync 00:27:33.163 15:46:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:33.163 15:46:03 -- nvmf/common.sh@120 -- # set +e 00:27:33.163 15:46:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:33.163 15:46:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:33.163 rmmod nvme_tcp 00:27:33.164 rmmod nvme_fabrics 00:27:33.164 rmmod nvme_keyring 00:27:33.164 15:46:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:33.164 15:46:03 -- nvmf/common.sh@124 -- # set -e 00:27:33.164 15:46:03 -- nvmf/common.sh@125 -- # return 0 00:27:33.164 15:46:03 -- nvmf/common.sh@478 -- # '[' -n 82216 ']' 00:27:33.164 15:46:03 -- nvmf/common.sh@479 -- # killprocess 82216 00:27:33.164 15:46:03 -- common/autotest_common.sh@936 -- # '[' -z 82216 ']' 00:27:33.164 15:46:03 -- common/autotest_common.sh@940 -- # kill -0 82216 00:27:33.164 15:46:03 -- common/autotest_common.sh@941 -- # uname 00:27:33.164 15:46:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:33.164 15:46:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82216 00:27:33.164 killing process with pid 82216 00:27:33.164 15:46:03 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:33.164 15:46:03 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:27:33.164 15:46:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82216' 00:27:33.164 15:46:03 -- common/autotest_common.sh@955 -- # kill 82216 00:27:33.164 15:46:03 -- common/autotest_common.sh@960 -- # wait 82216 00:27:33.421 15:46:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:33.421 15:46:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:33.422 15:46:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:33.422 15:46:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:33.422 15:46:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:33.422 15:46:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.422 15:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.422 15:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.422 15:46:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:33.422 00:27:33.422 real 0m11.309s 00:27:33.422 user 0m22.120s 00:27:33.422 sys 0m1.785s 00:27:33.422 ************************************ 00:27:33.422 END TEST nvmf_discovery 00:27:33.422 ************************************ 00:27:33.422 15:46:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:33.422 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:27:33.422 15:46:03 -- nvmf/nvmf.sh@100 -- # run_test nvmf_discovery_remove_ifc /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:33.422 15:46:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:33.422 15:46:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:33.422 15:46:03 -- common/autotest_common.sh@10 -- # set +x 00:27:33.680 
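Before the next test banner, it helps to spell out what the nvmf_discovery run above just asserted about bdev_nvme_start_discovery. A minimal sketch, using the same rpc_cmd wrapper and /tmp/host.sock socket seen in the trace (here backed by the Go JSON-RPC client, per SPDK_JSONRPC_GO_CLIENT=1); the flag spellings are taken from the trace itself rather than verified against every rpc client:

# First discovery handle, waiting for the discovered subsystem to attach (-w): succeeds.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w

# Starting a second discovery against the same 10.0.0.2:8009 endpoint -- whether reusing
# the name "nvme" or using a new one like "nvme_second" -- is rejected with Code=-17
# (File exists), which is exactly what the NOT wrapper above asserts.
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 \
    -f ipv4 -q nqn.2021-12.io.spdk:test -w && echo "unexpected success"

# A new name pointed at a port nothing listens on (8010) retries connect() until the
# 3000 ms attach timeout (-T) expires, then fails with Code=-110 (Connection timed out).
rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 \
    -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 && echo "unexpected success"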
************************************ 00:27:33.680 START TEST nvmf_discovery_remove_ifc 00:27:33.680 ************************************ 00:27:33.680 15:46:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:33.680 * Looking for test storage... 00:27:33.680 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:33.680 15:46:03 -- nvmf/common.sh@7 -- # uname -s 00:27:33.680 15:46:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.680 15:46:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.680 15:46:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.680 15:46:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.680 15:46:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:33.680 15:46:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:33.680 15:46:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.680 15:46:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:33.680 15:46:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.680 15:46:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:33.680 15:46:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:33.680 15:46:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:33.680 15:46:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.680 15:46:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:33.680 15:46:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:33.680 15:46:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.680 15:46:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:33.680 15:46:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.680 15:46:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.680 15:46:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.680 15:46:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.680 15:46:03 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.680 15:46:03 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.680 15:46:03 -- paths/export.sh@5 -- # export PATH 00:27:33.680 15:46:03 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.680 15:46:03 -- nvmf/common.sh@47 -- # : 0 00:27:33.680 15:46:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:33.680 15:46:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:33.680 15:46:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:33.680 15:46:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.680 15:46:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.680 15:46:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:33.680 15:46:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:33.680 15:46:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:27:33.680 15:46:03 -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:33.680 15:46:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:33.680 15:46:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.680 15:46:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:33.680 15:46:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:33.680 15:46:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:33.680 15:46:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:33.680 15:46:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:33.680 15:46:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:33.680 15:46:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:33.680 15:46:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:33.680 15:46:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:33.680 15:46:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:33.680 15:46:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:33.680 15:46:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:33.680 15:46:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:33.680 15:46:03 -- 
nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:33.680 15:46:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:33.680 15:46:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:33.680 15:46:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:33.680 15:46:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:33.680 15:46:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:33.680 15:46:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:33.680 15:46:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:33.680 15:46:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:33.680 15:46:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:33.680 15:46:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:33.680 15:46:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:33.680 15:46:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:33.680 Cannot find device "nvmf_tgt_br" 00:27:33.680 15:46:03 -- nvmf/common.sh@155 -- # true 00:27:33.680 15:46:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:33.680 Cannot find device "nvmf_tgt_br2" 00:27:33.680 15:46:03 -- nvmf/common.sh@156 -- # true 00:27:33.680 15:46:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:33.680 15:46:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:33.680 Cannot find device "nvmf_tgt_br" 00:27:33.680 15:46:03 -- nvmf/common.sh@158 -- # true 00:27:33.680 15:46:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:33.680 Cannot find device "nvmf_tgt_br2" 00:27:33.680 15:46:03 -- nvmf/common.sh@159 -- # true 00:27:33.680 15:46:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:33.938 15:46:03 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:33.938 15:46:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:33.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:33.938 15:46:04 -- nvmf/common.sh@162 -- # true 00:27:33.938 15:46:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:33.938 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:33.938 15:46:04 -- nvmf/common.sh@163 -- # true 00:27:33.938 15:46:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:33.938 15:46:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:33.938 15:46:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:33.938 15:46:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:33.938 15:46:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:33.938 15:46:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:33.938 15:46:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:33.938 15:46:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:33.938 15:46:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:33.938 15:46:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:33.938 15:46:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:33.938 15:46:04 -- 
nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:33.938 15:46:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:33.938 15:46:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:33.938 15:46:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:33.938 15:46:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:33.938 15:46:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:33.938 15:46:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:33.938 15:46:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:33.938 15:46:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:33.938 15:46:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:33.938 15:46:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:33.938 15:46:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:33.939 15:46:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:33.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:33.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.105 ms 00:27:33.939 00:27:33.939 --- 10.0.0.2 ping statistics --- 00:27:33.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.939 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:27:33.939 15:46:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:33.939 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:33.939 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.069 ms 00:27:33.939 00:27:33.939 --- 10.0.0.3 ping statistics --- 00:27:33.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.939 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:27:33.939 15:46:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:33.939 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:33.939 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:27:33.939 00:27:33.939 --- 10.0.0.1 ping statistics --- 00:27:33.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:33.939 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:27:33.939 15:46:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:33.939 15:46:04 -- nvmf/common.sh@422 -- # return 0 00:27:33.939 15:46:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:33.939 15:46:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:33.939 15:46:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:33.939 15:46:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:33.939 15:46:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:33.939 15:46:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:33.939 15:46:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:34.196 15:46:04 -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:34.196 15:46:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:34.196 15:46:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:34.196 15:46:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.196 15:46:04 -- nvmf/common.sh@470 -- # nvmfpid=82756 00:27:34.196 15:46:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:34.197 15:46:04 -- nvmf/common.sh@471 -- # waitforlisten 82756 00:27:34.197 15:46:04 -- common/autotest_common.sh@817 -- # '[' -z 82756 ']' 00:27:34.197 15:46:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.197 15:46:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:34.197 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.197 15:46:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.197 15:46:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:34.197 15:46:04 -- common/autotest_common.sh@10 -- # set +x 00:27:34.197 [2024-04-26 15:46:04.316922] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:27:34.197 [2024-04-26 15:46:04.317345] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:34.197 [2024-04-26 15:46:04.470245] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:34.454 [2024-04-26 15:46:04.606901] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:34.454 [2024-04-26 15:46:04.606976] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:34.454 [2024-04-26 15:46:04.606991] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:34.454 [2024-04-26 15:46:04.607001] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:34.454 [2024-04-26 15:46:04.607020] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
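The nvmf_veth_init sequence above (NET_TYPE=virt) is easier to follow when collapsed to its essential commands. This is a hand-condensed summary of the ip invocations visible in the trace, not a separate script shipped with SPDK; the second target interface (nvmf_tgt_if2 / 10.0.0.3) and the individual link-up steps are elided for brevity:

# The target runs in its own network namespace, reachable from the host over veth
# pairs whose bridge-side ends are enslaved to a single bridge.
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br          # host/initiator side
ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br            # target side
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                           # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if   # target address
ip link add nvmf_br type bridge
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
# The three pings in the trace simply confirm connectivity between 10.0.0.1 and
# 10.0.0.2/10.0.0.3 before the target is started inside the namespace.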
00:27:34.454 [2024-04-26 15:46:04.607067] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.387 15:46:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:35.387 15:46:05 -- common/autotest_common.sh@850 -- # return 0 00:27:35.387 15:46:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:35.387 15:46:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:35.387 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:27:35.387 15:46:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:35.387 15:46:05 -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:35.387 15:46:05 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:35.387 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:27:35.387 [2024-04-26 15:46:05.395334] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:35.387 [2024-04-26 15:46:05.403493] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:35.387 null0 00:27:35.387 [2024-04-26 15:46:05.435430] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:35.387 15:46:05 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:35.387 15:46:05 -- host/discovery_remove_ifc.sh@59 -- # hostpid=82812 00:27:35.387 15:46:05 -- host/discovery_remove_ifc.sh@58 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:35.387 15:46:05 -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 82812 /tmp/host.sock 00:27:35.387 15:46:05 -- common/autotest_common.sh@817 -- # '[' -z 82812 ']' 00:27:35.387 15:46:05 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:27:35.387 15:46:05 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:35.387 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:35.387 15:46:05 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:35.387 15:46:05 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:35.387 15:46:05 -- common/autotest_common.sh@10 -- # set +x 00:27:35.387 [2024-04-26 15:46:05.517851] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:27:35.387 [2024-04-26 15:46:05.518191] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82812 ] 00:27:35.387 [2024-04-26 15:46:05.656516] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:35.644 [2024-04-26 15:46:05.789672] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:27:36.579 15:46:06 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:36.579 15:46:06 -- common/autotest_common.sh@850 -- # return 0 00:27:36.579 15:46:06 -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:36.579 15:46:06 -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:36.579 15:46:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.579 15:46:06 -- common/autotest_common.sh@10 -- # set +x 00:27:36.579 15:46:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.579 15:46:06 -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:36.579 15:46:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.579 15:46:06 -- common/autotest_common.sh@10 -- # set +x 00:27:36.579 15:46:06 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:36.579 15:46:06 -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:36.579 15:46:06 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:36.579 15:46:06 -- common/autotest_common.sh@10 -- # set +x 00:27:37.527 [2024-04-26 15:46:07.698709] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:37.527 [2024-04-26 15:46:07.698751] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:37.527 [2024-04-26 15:46:07.698771] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:37.527 [2024-04-26 15:46:07.784909] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:37.801 [2024-04-26 15:46:07.841175] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:37.801 [2024-04-26 15:46:07.841259] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:37.801 [2024-04-26 15:46:07.841288] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:37.801 [2024-04-26 15:46:07.841305] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:37.801 [2024-04-26 15:46:07.841332] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:37.801 15:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.801 [2024-04-26 15:46:07.847056] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x231d930 was disconnected and freed. delete nvme_qpair. 
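The repeated discovery_remove_ifc.sh@29/@33/@34 xtrace lines that follow are a polling loop: the test keeps listing bdevs over /tmp/host.sock until the list matches what it expects (nvme0n1 while connected, an empty string after the interface is pulled, nvme1n1 after recovery). A rough reconstruction of those helpers, inferred from the trace rather than copied from discovery_remove_ifc.sh, looks like:

get_bdev_list() {
    # List the bdev names seen by the host app, normalized to one sorted line.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    # Poll once per second until the bdev list equals the expected value.
    local expected=$1
    while [[ "$(get_bdev_list)" != "$expected" ]]; do
        sleep 1
    done
}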
00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.801 15:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.801 15:46:07 -- common/autotest_common.sh@10 -- # set +x 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.801 15:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:37.801 15:46:07 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:37.801 15:46:07 -- common/autotest_common.sh@10 -- # set +x 00:27:37.801 15:46:07 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:37.801 15:46:07 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:38.754 15:46:08 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:38.754 15:46:08 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:38.754 15:46:08 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:38.754 15:46:08 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:38.754 15:46:08 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:38.754 15:46:08 -- common/autotest_common.sh@10 -- # set +x 00:27:38.754 15:46:08 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:38.754 15:46:09 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:38.754 15:46:09 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:38.754 15:46:09 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:40.129 15:46:10 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:40.129 15:46:10 -- common/autotest_common.sh@10 -- # set +x 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:40.129 15:46:10 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:40.129 15:46:10 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:41.129 15:46:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:41.129 15:46:11 -- common/autotest_common.sh@10 -- # set 
+x 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:41.129 15:46:11 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:41.129 15:46:11 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:42.074 15:46:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:42.074 15:46:12 -- common/autotest_common.sh@10 -- # set +x 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:42.074 15:46:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:42.074 15:46:12 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.010 15:46:13 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.010 15:46:13 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.010 15:46:13 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.010 15:46:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:43.010 15:46:13 -- common/autotest_common.sh@10 -- # set +x 00:27:43.010 15:46:13 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.010 15:46:13 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.010 15:46:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:43.011 [2024-04-26 15:46:13.268963] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:43.011 [2024-04-26 15:46:13.269046] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.011 [2024-04-26 15:46:13.269063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.011 [2024-04-26 15:46:13.269076] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.011 [2024-04-26 15:46:13.269086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.011 [2024-04-26 15:46:13.269096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.011 [2024-04-26 15:46:13.269105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.011 [2024-04-26 15:46:13.269116] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:43.011 [2024-04-26 15:46:13.269125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.011 [2024-04-26 15:46:13.269145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:27:43.011 [2024-04-26 15:46:13.269155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:43.011 [2024-04-26 15:46:13.269165] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22875f0 is same with the state(5) to be set 00:27:43.011 [2024-04-26 15:46:13.278959] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22875f0 (9): Bad file descriptor 00:27:43.011 15:46:13 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.011 15:46:13 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:43.011 [2024-04-26 15:46:13.288978] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:27:44.384 15:46:14 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:44.384 15:46:14 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.384 15:46:14 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:44.384 15:46:14 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.384 15:46:14 -- common/autotest_common.sh@10 -- # set +x 00:27:44.384 15:46:14 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.384 15:46:14 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.384 [2024-04-26 15:46:14.339281] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:45.318 [2024-04-26 15:46:15.363292] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:45.318 [2024-04-26 15:46:15.363796] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x22875f0 with addr=10.0.0.2, port=4420 00:27:45.318 [2024-04-26 15:46:15.363863] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22875f0 is same with the state(5) to be set 00:27:45.318 [2024-04-26 15:46:15.364815] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22875f0 (9): Bad file descriptor 00:27:45.318 [2024-04-26 15:46:15.364927] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:45.318 [2024-04-26 15:46:15.364986] bdev_nvme.c:6670:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:45.318 [2024-04-26 15:46:15.365067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.318 [2024-04-26 15:46:15.365130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.318 [2024-04-26 15:46:15.365186] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.318 [2024-04-26 15:46:15.365208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.318 [2024-04-26 15:46:15.365230] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.318 [2024-04-26 15:46:15.365250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.318 [2024-04-26 15:46:15.365273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.318 [2024-04-26 15:46:15.365293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.318 [2024-04-26 15:46:15.365316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:45.318 [2024-04-26 15:46:15.365336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:45.318 [2024-04-26 15:46:15.365357] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
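The errno 110 connect failures and the "in failed state" transition above are the direct result of the fault this test injects: earlier it deleted the target-side address and downed the veth inside the namespace, and just below it restores them so the discovery service can re-attach the subsystem (which comes back as nvme1/nvme1n1). Condensed from the @75/@76 and @82/@83 trace lines:

# Inject the fault: the target loses its address and link, so the host's reconnect
# attempts fail (connect() errno 110) and the controller ends up in failed state.
ip netns exec nvmf_tgt_ns_spdk ip addr del 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if down

# Heal it: once the address and link are back, discovery re-attaches the subsystem.
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up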
00:27:45.318 [2024-04-26 15:46:15.365392] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2286470 (9): Bad file descriptor 00:27:45.318 [2024-04-26 15:46:15.365967] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:45.318 [2024-04-26 15:46:15.366010] nvme_ctrlr.c:1148:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:27:45.318 15:46:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:45.318 15:46:15 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:45.318 15:46:15 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.257 15:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.257 15:46:16 -- common/autotest_common.sh@10 -- # set +x 00:27:46.257 15:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.257 15:46:16 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:46.257 15:46:16 -- common/autotest_common.sh@10 -- # set +x 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.257 15:46:16 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:46.257 15:46:16 -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.190 [2024-04-26 15:46:17.370630] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:47.190 [2024-04-26 15:46:17.370679] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:47.190 [2024-04-26 15:46:17.370699] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:47.191 [2024-04-26 15:46:17.456770] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:47.450 [2024-04-26 15:46:17.512051] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:47.450 [2024-04-26 15:46:17.512119] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:47.450 [2024-04-26 15:46:17.512158] bdev_nvme.c:7709:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:47.450 [2024-04-26 15:46:17.512176] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] 
attach nvme1 done 00:27:47.450 [2024-04-26 15:46:17.512186] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:47.450 [2024-04-26 15:46:17.519160] bdev_nvme.c:1605:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x23013c0 was disconnected and freed. delete nvme_qpair. 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.450 15:46:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:47.450 15:46:17 -- common/autotest_common.sh@10 -- # set +x 00:27:47.450 15:46:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:47.450 15:46:17 -- host/discovery_remove_ifc.sh@90 -- # killprocess 82812 00:27:47.450 15:46:17 -- common/autotest_common.sh@936 -- # '[' -z 82812 ']' 00:27:47.450 15:46:17 -- common/autotest_common.sh@940 -- # kill -0 82812 00:27:47.450 15:46:17 -- common/autotest_common.sh@941 -- # uname 00:27:47.450 15:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:47.450 15:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82812 00:27:47.450 killing process with pid 82812 00:27:47.450 15:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:27:47.450 15:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:27:47.450 15:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82812' 00:27:47.450 15:46:17 -- common/autotest_common.sh@955 -- # kill 82812 00:27:47.450 15:46:17 -- common/autotest_common.sh@960 -- # wait 82812 00:27:47.715 15:46:17 -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:47.715 15:46:17 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:47.715 15:46:17 -- nvmf/common.sh@117 -- # sync 00:27:47.715 15:46:17 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:47.715 15:46:17 -- nvmf/common.sh@120 -- # set +e 00:27:47.715 15:46:17 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:47.715 15:46:17 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:47.715 rmmod nvme_tcp 00:27:47.715 rmmod nvme_fabrics 00:27:47.715 rmmod nvme_keyring 00:27:47.715 15:46:17 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:47.715 15:46:17 -- nvmf/common.sh@124 -- # set -e 00:27:47.715 15:46:17 -- nvmf/common.sh@125 -- # return 0 00:27:47.715 15:46:17 -- nvmf/common.sh@478 -- # '[' -n 82756 ']' 00:27:47.715 15:46:17 -- nvmf/common.sh@479 -- # killprocess 82756 00:27:47.715 15:46:17 -- common/autotest_common.sh@936 -- # '[' -z 82756 ']' 00:27:47.715 15:46:17 -- common/autotest_common.sh@940 -- # kill -0 82756 00:27:47.715 15:46:17 -- common/autotest_common.sh@941 -- # uname 00:27:47.715 15:46:17 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:27:47.715 15:46:17 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 82756 00:27:47.715 killing process with pid 82756 00:27:47.715 15:46:17 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:27:47.715 15:46:17 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 
00:27:47.715 15:46:17 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 82756' 00:27:47.715 15:46:17 -- common/autotest_common.sh@955 -- # kill 82756 00:27:47.715 15:46:17 -- common/autotest_common.sh@960 -- # wait 82756 00:27:47.973 15:46:18 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:47.973 15:46:18 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:47.973 15:46:18 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:47.973 15:46:18 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:47.973 15:46:18 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:47.973 15:46:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.973 15:46:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:47.973 15:46:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.973 15:46:18 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:47.973 00:27:47.973 real 0m14.510s 00:27:47.973 user 0m24.879s 00:27:47.973 sys 0m1.713s 00:27:47.973 15:46:18 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:47.973 15:46:18 -- common/autotest_common.sh@10 -- # set +x 00:27:47.974 ************************************ 00:27:47.974 END TEST nvmf_discovery_remove_ifc 00:27:47.974 ************************************ 00:27:48.232 15:46:18 -- nvmf/nvmf.sh@101 -- # run_test nvmf_identify_kernel_target /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:48.233 15:46:18 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:48.233 15:46:18 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:48.233 15:46:18 -- common/autotest_common.sh@10 -- # set +x 00:27:48.233 ************************************ 00:27:48.233 START TEST nvmf_identify_kernel_target 00:27:48.233 ************************************ 00:27:48.233 15:46:18 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:48.233 * Looking for test storage... 
00:27:48.233 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:48.233 15:46:18 -- host/identify_kernel_nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:48.233 15:46:18 -- nvmf/common.sh@7 -- # uname -s 00:27:48.233 15:46:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:48.233 15:46:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:48.233 15:46:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:48.233 15:46:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:48.233 15:46:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:48.233 15:46:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:48.233 15:46:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:48.233 15:46:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:48.233 15:46:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:48.233 15:46:18 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:48.233 15:46:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:48.233 15:46:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:48.233 15:46:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:48.233 15:46:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:48.233 15:46:18 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:48.233 15:46:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:48.233 15:46:18 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:48.233 15:46:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:48.233 15:46:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:48.233 15:46:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:48.233 15:46:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.233 15:46:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.233 15:46:18 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.233 15:46:18 -- paths/export.sh@5 -- # export PATH 00:27:48.233 15:46:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:48.233 15:46:18 -- nvmf/common.sh@47 -- # : 0 00:27:48.233 15:46:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:48.233 15:46:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:48.233 15:46:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:48.233 15:46:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:48.233 15:46:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:48.233 15:46:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:48.233 15:46:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:48.233 15:46:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:48.233 15:46:18 -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:48.233 15:46:18 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:48.233 15:46:18 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:48.233 15:46:18 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:48.233 15:46:18 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:48.233 15:46:18 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:48.233 15:46:18 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:48.233 15:46:18 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:48.233 15:46:18 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:48.233 15:46:18 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:48.233 15:46:18 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:48.233 15:46:18 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:48.233 15:46:18 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:48.233 15:46:18 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:48.233 15:46:18 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:48.233 15:46:18 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:48.233 15:46:18 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:48.233 15:46:18 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:48.233 15:46:18 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:48.233 15:46:18 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:48.233 15:46:18 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:48.233 15:46:18 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:48.233 15:46:18 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:27:48.233 15:46:18 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:48.233 15:46:18 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:48.233 15:46:18 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:48.233 15:46:18 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:48.233 15:46:18 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:48.233 15:46:18 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:48.491 Cannot find device "nvmf_tgt_br" 00:27:48.491 15:46:18 -- nvmf/common.sh@155 -- # true 00:27:48.491 15:46:18 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:48.491 Cannot find device "nvmf_tgt_br2" 00:27:48.491 15:46:18 -- nvmf/common.sh@156 -- # true 00:27:48.491 15:46:18 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:48.491 15:46:18 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:48.491 Cannot find device "nvmf_tgt_br" 00:27:48.491 15:46:18 -- nvmf/common.sh@158 -- # true 00:27:48.491 15:46:18 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:48.491 Cannot find device "nvmf_tgt_br2" 00:27:48.491 15:46:18 -- nvmf/common.sh@159 -- # true 00:27:48.491 15:46:18 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:48.491 15:46:18 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:48.491 15:46:18 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:48.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:48.491 15:46:18 -- nvmf/common.sh@162 -- # true 00:27:48.491 15:46:18 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:48.491 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:48.491 15:46:18 -- nvmf/common.sh@163 -- # true 00:27:48.491 15:46:18 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:48.491 15:46:18 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:48.492 15:46:18 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:48.492 15:46:18 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:48.492 15:46:18 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:48.492 15:46:18 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:48.492 15:46:18 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:48.492 15:46:18 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:48.492 15:46:18 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:48.492 15:46:18 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:48.492 15:46:18 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:48.492 15:46:18 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:48.492 15:46:18 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:48.492 15:46:18 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:48.492 15:46:18 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:48.492 15:46:18 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:48.492 15:46:18 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:48.492 15:46:18 -- 
nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:48.492 15:46:18 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:48.492 15:46:18 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:48.492 15:46:18 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:48.750 15:46:18 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:48.750 15:46:18 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:48.750 15:46:18 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:48.750 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:48.750 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.057 ms 00:27:48.750 00:27:48.750 --- 10.0.0.2 ping statistics --- 00:27:48.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.750 rtt min/avg/max/mdev = 0.057/0.057/0.057/0.000 ms 00:27:48.750 15:46:18 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:48.750 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:48.750 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.062 ms 00:27:48.750 00:27:48.750 --- 10.0.0.3 ping statistics --- 00:27:48.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.750 rtt min/avg/max/mdev = 0.062/0.062/0.062/0.000 ms 00:27:48.750 15:46:18 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:48.750 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:48.750 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:27:48.750 00:27:48.750 --- 10.0.0.1 ping statistics --- 00:27:48.750 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:48.750 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:27:48.750 15:46:18 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:48.750 15:46:18 -- nvmf/common.sh@422 -- # return 0 00:27:48.750 15:46:18 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:48.750 15:46:18 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:48.750 15:46:18 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:48.750 15:46:18 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:48.750 15:46:18 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:48.750 15:46:18 -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:27:48.750 15:46:18 -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:27:48.750 15:46:18 -- nvmf/common.sh@717 -- # local ip 00:27:48.750 15:46:18 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:48.750 15:46:18 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:48.750 15:46:18 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:48.750 15:46:18 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:48.750 15:46:18 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:48.750 15:46:18 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:48.750 15:46:18 -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:27:48.750 15:46:18 -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:48.750 15:46:18 -- nvmf/common.sh@621 -- 
# local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:48.750 15:46:18 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:48.750 15:46:18 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:48.750 15:46:18 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:48.750 15:46:18 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:48.750 15:46:18 -- nvmf/common.sh@628 -- # local block nvme 00:27:48.750 15:46:18 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:48.750 15:46:18 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:48.750 15:46:18 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:49.008 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:49.008 Waiting for block devices as requested 00:27:49.008 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:49.266 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:49.266 15:46:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:49.266 15:46:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:49.266 15:46:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:49.266 15:46:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:49.266 15:46:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:49.266 15:46:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:49.266 15:46:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:49.266 15:46:19 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:49.266 15:46:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:49.266 No valid GPT data, bailing 00:27:49.266 15:46:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:49.266 15:46:19 -- scripts/common.sh@391 -- # pt= 00:27:49.266 15:46:19 -- scripts/common.sh@392 -- # return 1 00:27:49.266 15:46:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:49.266 15:46:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:49.266 15:46:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:49.266 15:46:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:27:49.266 15:46:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:27:49.266 15:46:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:49.266 15:46:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:49.266 15:46:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:27:49.266 15:46:19 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:49.266 15:46:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:49.523 No valid GPT data, bailing 00:27:49.524 15:46:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:49.524 15:46:19 -- scripts/common.sh@391 -- # pt= 00:27:49.524 15:46:19 -- scripts/common.sh@392 -- # return 1 00:27:49.524 15:46:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:27:49.524 15:46:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:49.524 15:46:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:49.524 15:46:19 -- nvmf/common.sh@641 -- # is_block_zoned 
nvme0n3 00:27:49.524 15:46:19 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:27:49.524 15:46:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:49.524 15:46:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:49.524 15:46:19 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:27:49.524 15:46:19 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:49.524 15:46:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:49.524 No valid GPT data, bailing 00:27:49.524 15:46:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:49.524 15:46:19 -- scripts/common.sh@391 -- # pt= 00:27:49.524 15:46:19 -- scripts/common.sh@392 -- # return 1 00:27:49.524 15:46:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:27:49.524 15:46:19 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:49.524 15:46:19 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:49.524 15:46:19 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:27:49.524 15:46:19 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:49.524 15:46:19 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:49.524 15:46:19 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:49.524 15:46:19 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:27:49.524 15:46:19 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:49.524 15:46:19 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:49.524 No valid GPT data, bailing 00:27:49.524 15:46:19 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:49.524 15:46:19 -- scripts/common.sh@391 -- # pt= 00:27:49.524 15:46:19 -- scripts/common.sh@392 -- # return 1 00:27:49.524 15:46:19 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:27:49.524 15:46:19 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:27:49.524 15:46:19 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:49.524 15:46:19 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:49.524 15:46:19 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:49.524 15:46:19 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:49.524 15:46:19 -- nvmf/common.sh@656 -- # echo 1 00:27:49.524 15:46:19 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:27:49.524 15:46:19 -- nvmf/common.sh@658 -- # echo 1 00:27:49.524 15:46:19 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:49.524 15:46:19 -- nvmf/common.sh@661 -- # echo tcp 00:27:49.524 15:46:19 -- nvmf/common.sh@662 -- # echo 4420 00:27:49.524 15:46:19 -- nvmf/common.sh@663 -- # echo ipv4 00:27:49.524 15:46:19 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:49.524 15:46:19 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -a 10.0.0.1 -t tcp -s 4420 00:27:49.524 00:27:49.524 Discovery Log Number of Records 2, Generation counter 2 00:27:49.524 =====Discovery Log Entry 0====== 00:27:49.524 trtype: tcp 00:27:49.524 adrfam: ipv4 00:27:49.524 subtype: current discovery subsystem 00:27:49.524 treq: not specified, sq flow control disable supported 00:27:49.524 portid: 1 00:27:49.524 trsvcid: 4420 00:27:49.524 
subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:49.524 traddr: 10.0.0.1 00:27:49.524 eflags: none 00:27:49.524 sectype: none 00:27:49.524 =====Discovery Log Entry 1====== 00:27:49.524 trtype: tcp 00:27:49.524 adrfam: ipv4 00:27:49.524 subtype: nvme subsystem 00:27:49.524 treq: not specified, sq flow control disable supported 00:27:49.524 portid: 1 00:27:49.524 trsvcid: 4420 00:27:49.524 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:49.524 traddr: 10.0.0.1 00:27:49.524 eflags: none 00:27:49.524 sectype: none 00:27:49.524 15:46:19 -- host/identify_kernel_nvmf.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:27:49.524 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:27:49.782 ===================================================== 00:27:49.783 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:27:49.783 ===================================================== 00:27:49.783 Controller Capabilities/Features 00:27:49.783 ================================ 00:27:49.783 Vendor ID: 0000 00:27:49.783 Subsystem Vendor ID: 0000 00:27:49.783 Serial Number: 0d2dbc67245cf8a6fc56 00:27:49.783 Model Number: Linux 00:27:49.783 Firmware Version: 6.7.0-68 00:27:49.783 Recommended Arb Burst: 0 00:27:49.783 IEEE OUI Identifier: 00 00 00 00:27:49.783 Multi-path I/O 00:27:49.783 May have multiple subsystem ports: No 00:27:49.783 May have multiple controllers: No 00:27:49.783 Associated with SR-IOV VF: No 00:27:49.783 Max Data Transfer Size: Unlimited 00:27:49.783 Max Number of Namespaces: 0 00:27:49.783 Max Number of I/O Queues: 1024 00:27:49.783 NVMe Specification Version (VS): 1.3 00:27:49.783 NVMe Specification Version (Identify): 1.3 00:27:49.783 Maximum Queue Entries: 1024 00:27:49.783 Contiguous Queues Required: No 00:27:49.783 Arbitration Mechanisms Supported 00:27:49.783 Weighted Round Robin: Not Supported 00:27:49.783 Vendor Specific: Not Supported 00:27:49.783 Reset Timeout: 7500 ms 00:27:49.783 Doorbell Stride: 4 bytes 00:27:49.783 NVM Subsystem Reset: Not Supported 00:27:49.783 Command Sets Supported 00:27:49.783 NVM Command Set: Supported 00:27:49.783 Boot Partition: Not Supported 00:27:49.783 Memory Page Size Minimum: 4096 bytes 00:27:49.783 Memory Page Size Maximum: 4096 bytes 00:27:49.783 Persistent Memory Region: Not Supported 00:27:49.783 Optional Asynchronous Events Supported 00:27:49.783 Namespace Attribute Notices: Not Supported 00:27:49.783 Firmware Activation Notices: Not Supported 00:27:49.783 ANA Change Notices: Not Supported 00:27:49.783 PLE Aggregate Log Change Notices: Not Supported 00:27:49.783 LBA Status Info Alert Notices: Not Supported 00:27:49.783 EGE Aggregate Log Change Notices: Not Supported 00:27:49.783 Normal NVM Subsystem Shutdown event: Not Supported 00:27:49.783 Zone Descriptor Change Notices: Not Supported 00:27:49.783 Discovery Log Change Notices: Supported 00:27:49.783 Controller Attributes 00:27:49.783 128-bit Host Identifier: Not Supported 00:27:49.783 Non-Operational Permissive Mode: Not Supported 00:27:49.783 NVM Sets: Not Supported 00:27:49.783 Read Recovery Levels: Not Supported 00:27:49.783 Endurance Groups: Not Supported 00:27:49.783 Predictable Latency Mode: Not Supported 00:27:49.783 Traffic Based Keep ALive: Not Supported 00:27:49.783 Namespace Granularity: Not Supported 00:27:49.783 SQ Associations: Not Supported 00:27:49.783 UUID List: Not Supported 00:27:49.783 Multi-Domain Subsystem: Not Supported 00:27:49.783 Fixed Capacity Management: Not Supported 
00:27:49.783 Variable Capacity Management: Not Supported 00:27:49.783 Delete Endurance Group: Not Supported 00:27:49.783 Delete NVM Set: Not Supported 00:27:49.783 Extended LBA Formats Supported: Not Supported 00:27:49.783 Flexible Data Placement Supported: Not Supported 00:27:49.783 00:27:49.783 Controller Memory Buffer Support 00:27:49.783 ================================ 00:27:49.783 Supported: No 00:27:49.783 00:27:49.783 Persistent Memory Region Support 00:27:49.783 ================================ 00:27:49.783 Supported: No 00:27:49.783 00:27:49.783 Admin Command Set Attributes 00:27:49.783 ============================ 00:27:49.783 Security Send/Receive: Not Supported 00:27:49.783 Format NVM: Not Supported 00:27:49.783 Firmware Activate/Download: Not Supported 00:27:49.783 Namespace Management: Not Supported 00:27:49.783 Device Self-Test: Not Supported 00:27:49.783 Directives: Not Supported 00:27:49.783 NVMe-MI: Not Supported 00:27:49.783 Virtualization Management: Not Supported 00:27:49.783 Doorbell Buffer Config: Not Supported 00:27:49.783 Get LBA Status Capability: Not Supported 00:27:49.783 Command & Feature Lockdown Capability: Not Supported 00:27:49.783 Abort Command Limit: 1 00:27:49.783 Async Event Request Limit: 1 00:27:49.783 Number of Firmware Slots: N/A 00:27:49.783 Firmware Slot 1 Read-Only: N/A 00:27:49.783 Firmware Activation Without Reset: N/A 00:27:49.783 Multiple Update Detection Support: N/A 00:27:49.783 Firmware Update Granularity: No Information Provided 00:27:49.783 Per-Namespace SMART Log: No 00:27:49.783 Asymmetric Namespace Access Log Page: Not Supported 00:27:49.783 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:27:49.783 Command Effects Log Page: Not Supported 00:27:49.783 Get Log Page Extended Data: Supported 00:27:49.783 Telemetry Log Pages: Not Supported 00:27:49.783 Persistent Event Log Pages: Not Supported 00:27:49.783 Supported Log Pages Log Page: May Support 00:27:49.783 Commands Supported & Effects Log Page: Not Supported 00:27:49.783 Feature Identifiers & Effects Log Page:May Support 00:27:49.783 NVMe-MI Commands & Effects Log Page: May Support 00:27:49.783 Data Area 4 for Telemetry Log: Not Supported 00:27:49.783 Error Log Page Entries Supported: 1 00:27:49.783 Keep Alive: Not Supported 00:27:49.783 00:27:49.783 NVM Command Set Attributes 00:27:49.783 ========================== 00:27:49.783 Submission Queue Entry Size 00:27:49.783 Max: 1 00:27:49.783 Min: 1 00:27:49.783 Completion Queue Entry Size 00:27:49.783 Max: 1 00:27:49.783 Min: 1 00:27:49.783 Number of Namespaces: 0 00:27:49.783 Compare Command: Not Supported 00:27:49.783 Write Uncorrectable Command: Not Supported 00:27:49.783 Dataset Management Command: Not Supported 00:27:49.783 Write Zeroes Command: Not Supported 00:27:49.783 Set Features Save Field: Not Supported 00:27:49.783 Reservations: Not Supported 00:27:49.783 Timestamp: Not Supported 00:27:49.783 Copy: Not Supported 00:27:49.783 Volatile Write Cache: Not Present 00:27:49.783 Atomic Write Unit (Normal): 1 00:27:49.783 Atomic Write Unit (PFail): 1 00:27:49.783 Atomic Compare & Write Unit: 1 00:27:49.783 Fused Compare & Write: Not Supported 00:27:49.783 Scatter-Gather List 00:27:49.783 SGL Command Set: Supported 00:27:49.783 SGL Keyed: Not Supported 00:27:49.783 SGL Bit Bucket Descriptor: Not Supported 00:27:49.783 SGL Metadata Pointer: Not Supported 00:27:49.783 Oversized SGL: Not Supported 00:27:49.783 SGL Metadata Address: Not Supported 00:27:49.783 SGL Offset: Supported 00:27:49.783 Transport SGL Data Block: Not 
Supported 00:27:49.783 Replay Protected Memory Block: Not Supported 00:27:49.783 00:27:49.783 Firmware Slot Information 00:27:49.783 ========================= 00:27:49.783 Active slot: 0 00:27:49.783 00:27:49.783 00:27:49.783 Error Log 00:27:49.783 ========= 00:27:49.783 00:27:49.783 Active Namespaces 00:27:49.783 ================= 00:27:49.783 Discovery Log Page 00:27:49.783 ================== 00:27:49.783 Generation Counter: 2 00:27:49.783 Number of Records: 2 00:27:49.783 Record Format: 0 00:27:49.783 00:27:49.783 Discovery Log Entry 0 00:27:49.783 ---------------------- 00:27:49.783 Transport Type: 3 (TCP) 00:27:49.783 Address Family: 1 (IPv4) 00:27:49.783 Subsystem Type: 3 (Current Discovery Subsystem) 00:27:49.783 Entry Flags: 00:27:49.783 Duplicate Returned Information: 0 00:27:49.783 Explicit Persistent Connection Support for Discovery: 0 00:27:49.783 Transport Requirements: 00:27:49.783 Secure Channel: Not Specified 00:27:49.783 Port ID: 1 (0x0001) 00:27:49.783 Controller ID: 65535 (0xffff) 00:27:49.783 Admin Max SQ Size: 32 00:27:49.783 Transport Service Identifier: 4420 00:27:49.783 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:27:49.783 Transport Address: 10.0.0.1 00:27:49.783 Discovery Log Entry 1 00:27:49.783 ---------------------- 00:27:49.783 Transport Type: 3 (TCP) 00:27:49.783 Address Family: 1 (IPv4) 00:27:49.783 Subsystem Type: 2 (NVM Subsystem) 00:27:49.783 Entry Flags: 00:27:49.783 Duplicate Returned Information: 0 00:27:49.783 Explicit Persistent Connection Support for Discovery: 0 00:27:49.783 Transport Requirements: 00:27:49.783 Secure Channel: Not Specified 00:27:49.783 Port ID: 1 (0x0001) 00:27:49.783 Controller ID: 65535 (0xffff) 00:27:49.783 Admin Max SQ Size: 32 00:27:49.783 Transport Service Identifier: 4420 00:27:49.783 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:27:49.783 Transport Address: 10.0.0.1 00:27:49.783 15:46:20 -- host/identify_kernel_nvmf.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:50.042 get_feature(0x01) failed 00:27:50.042 get_feature(0x02) failed 00:27:50.042 get_feature(0x04) failed 00:27:50.042 ===================================================== 00:27:50.042 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:50.042 ===================================================== 00:27:50.042 Controller Capabilities/Features 00:27:50.042 ================================ 00:27:50.042 Vendor ID: 0000 00:27:50.042 Subsystem Vendor ID: 0000 00:27:50.042 Serial Number: 4367b8688fd9552f4a46 00:27:50.042 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:27:50.042 Firmware Version: 6.7.0-68 00:27:50.042 Recommended Arb Burst: 6 00:27:50.042 IEEE OUI Identifier: 00 00 00 00:27:50.042 Multi-path I/O 00:27:50.042 May have multiple subsystem ports: Yes 00:27:50.042 May have multiple controllers: Yes 00:27:50.042 Associated with SR-IOV VF: No 00:27:50.042 Max Data Transfer Size: Unlimited 00:27:50.042 Max Number of Namespaces: 1024 00:27:50.042 Max Number of I/O Queues: 128 00:27:50.042 NVMe Specification Version (VS): 1.3 00:27:50.042 NVMe Specification Version (Identify): 1.3 00:27:50.042 Maximum Queue Entries: 1024 00:27:50.042 Contiguous Queues Required: No 00:27:50.042 Arbitration Mechanisms Supported 00:27:50.042 Weighted Round Robin: Not Supported 00:27:50.042 Vendor Specific: Not Supported 00:27:50.042 Reset Timeout: 7500 ms 00:27:50.042 Doorbell Stride: 4 bytes 
00:27:50.042 NVM Subsystem Reset: Not Supported 00:27:50.042 Command Sets Supported 00:27:50.042 NVM Command Set: Supported 00:27:50.042 Boot Partition: Not Supported 00:27:50.042 Memory Page Size Minimum: 4096 bytes 00:27:50.042 Memory Page Size Maximum: 4096 bytes 00:27:50.042 Persistent Memory Region: Not Supported 00:27:50.043 Optional Asynchronous Events Supported 00:27:50.043 Namespace Attribute Notices: Supported 00:27:50.043 Firmware Activation Notices: Not Supported 00:27:50.043 ANA Change Notices: Supported 00:27:50.043 PLE Aggregate Log Change Notices: Not Supported 00:27:50.043 LBA Status Info Alert Notices: Not Supported 00:27:50.043 EGE Aggregate Log Change Notices: Not Supported 00:27:50.043 Normal NVM Subsystem Shutdown event: Not Supported 00:27:50.043 Zone Descriptor Change Notices: Not Supported 00:27:50.043 Discovery Log Change Notices: Not Supported 00:27:50.043 Controller Attributes 00:27:50.043 128-bit Host Identifier: Supported 00:27:50.043 Non-Operational Permissive Mode: Not Supported 00:27:50.043 NVM Sets: Not Supported 00:27:50.043 Read Recovery Levels: Not Supported 00:27:50.043 Endurance Groups: Not Supported 00:27:50.043 Predictable Latency Mode: Not Supported 00:27:50.043 Traffic Based Keep ALive: Supported 00:27:50.043 Namespace Granularity: Not Supported 00:27:50.043 SQ Associations: Not Supported 00:27:50.043 UUID List: Not Supported 00:27:50.043 Multi-Domain Subsystem: Not Supported 00:27:50.043 Fixed Capacity Management: Not Supported 00:27:50.043 Variable Capacity Management: Not Supported 00:27:50.043 Delete Endurance Group: Not Supported 00:27:50.043 Delete NVM Set: Not Supported 00:27:50.043 Extended LBA Formats Supported: Not Supported 00:27:50.043 Flexible Data Placement Supported: Not Supported 00:27:50.043 00:27:50.043 Controller Memory Buffer Support 00:27:50.043 ================================ 00:27:50.043 Supported: No 00:27:50.043 00:27:50.043 Persistent Memory Region Support 00:27:50.043 ================================ 00:27:50.043 Supported: No 00:27:50.043 00:27:50.043 Admin Command Set Attributes 00:27:50.043 ============================ 00:27:50.043 Security Send/Receive: Not Supported 00:27:50.043 Format NVM: Not Supported 00:27:50.043 Firmware Activate/Download: Not Supported 00:27:50.043 Namespace Management: Not Supported 00:27:50.043 Device Self-Test: Not Supported 00:27:50.043 Directives: Not Supported 00:27:50.043 NVMe-MI: Not Supported 00:27:50.043 Virtualization Management: Not Supported 00:27:50.043 Doorbell Buffer Config: Not Supported 00:27:50.043 Get LBA Status Capability: Not Supported 00:27:50.043 Command & Feature Lockdown Capability: Not Supported 00:27:50.043 Abort Command Limit: 4 00:27:50.043 Async Event Request Limit: 4 00:27:50.043 Number of Firmware Slots: N/A 00:27:50.043 Firmware Slot 1 Read-Only: N/A 00:27:50.043 Firmware Activation Without Reset: N/A 00:27:50.043 Multiple Update Detection Support: N/A 00:27:50.043 Firmware Update Granularity: No Information Provided 00:27:50.043 Per-Namespace SMART Log: Yes 00:27:50.043 Asymmetric Namespace Access Log Page: Supported 00:27:50.043 ANA Transition Time : 10 sec 00:27:50.043 00:27:50.043 Asymmetric Namespace Access Capabilities 00:27:50.043 ANA Optimized State : Supported 00:27:50.043 ANA Non-Optimized State : Supported 00:27:50.043 ANA Inaccessible State : Supported 00:27:50.043 ANA Persistent Loss State : Supported 00:27:50.043 ANA Change State : Supported 00:27:50.043 ANAGRPID is not changed : No 00:27:50.043 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 
00:27:50.043 00:27:50.043 ANA Group Identifier Maximum : 128 00:27:50.043 Number of ANA Group Identifiers : 128 00:27:50.043 Max Number of Allowed Namespaces : 1024 00:27:50.043 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:27:50.043 Command Effects Log Page: Supported 00:27:50.043 Get Log Page Extended Data: Supported 00:27:50.043 Telemetry Log Pages: Not Supported 00:27:50.043 Persistent Event Log Pages: Not Supported 00:27:50.043 Supported Log Pages Log Page: May Support 00:27:50.043 Commands Supported & Effects Log Page: Not Supported 00:27:50.043 Feature Identifiers & Effects Log Page:May Support 00:27:50.043 NVMe-MI Commands & Effects Log Page: May Support 00:27:50.043 Data Area 4 for Telemetry Log: Not Supported 00:27:50.043 Error Log Page Entries Supported: 128 00:27:50.043 Keep Alive: Supported 00:27:50.043 Keep Alive Granularity: 1000 ms 00:27:50.043 00:27:50.043 NVM Command Set Attributes 00:27:50.043 ========================== 00:27:50.043 Submission Queue Entry Size 00:27:50.043 Max: 64 00:27:50.043 Min: 64 00:27:50.043 Completion Queue Entry Size 00:27:50.043 Max: 16 00:27:50.043 Min: 16 00:27:50.043 Number of Namespaces: 1024 00:27:50.043 Compare Command: Not Supported 00:27:50.043 Write Uncorrectable Command: Not Supported 00:27:50.043 Dataset Management Command: Supported 00:27:50.043 Write Zeroes Command: Supported 00:27:50.043 Set Features Save Field: Not Supported 00:27:50.043 Reservations: Not Supported 00:27:50.043 Timestamp: Not Supported 00:27:50.043 Copy: Not Supported 00:27:50.043 Volatile Write Cache: Present 00:27:50.043 Atomic Write Unit (Normal): 1 00:27:50.043 Atomic Write Unit (PFail): 1 00:27:50.043 Atomic Compare & Write Unit: 1 00:27:50.043 Fused Compare & Write: Not Supported 00:27:50.043 Scatter-Gather List 00:27:50.043 SGL Command Set: Supported 00:27:50.043 SGL Keyed: Not Supported 00:27:50.043 SGL Bit Bucket Descriptor: Not Supported 00:27:50.043 SGL Metadata Pointer: Not Supported 00:27:50.043 Oversized SGL: Not Supported 00:27:50.043 SGL Metadata Address: Not Supported 00:27:50.043 SGL Offset: Supported 00:27:50.043 Transport SGL Data Block: Not Supported 00:27:50.043 Replay Protected Memory Block: Not Supported 00:27:50.043 00:27:50.043 Firmware Slot Information 00:27:50.043 ========================= 00:27:50.043 Active slot: 0 00:27:50.043 00:27:50.043 Asymmetric Namespace Access 00:27:50.043 =========================== 00:27:50.043 Change Count : 0 00:27:50.043 Number of ANA Group Descriptors : 1 00:27:50.043 ANA Group Descriptor : 0 00:27:50.043 ANA Group ID : 1 00:27:50.043 Number of NSID Values : 1 00:27:50.043 Change Count : 0 00:27:50.043 ANA State : 1 00:27:50.043 Namespace Identifier : 1 00:27:50.043 00:27:50.043 Commands Supported and Effects 00:27:50.043 ============================== 00:27:50.043 Admin Commands 00:27:50.043 -------------- 00:27:50.043 Get Log Page (02h): Supported 00:27:50.043 Identify (06h): Supported 00:27:50.043 Abort (08h): Supported 00:27:50.043 Set Features (09h): Supported 00:27:50.043 Get Features (0Ah): Supported 00:27:50.043 Asynchronous Event Request (0Ch): Supported 00:27:50.043 Keep Alive (18h): Supported 00:27:50.043 I/O Commands 00:27:50.043 ------------ 00:27:50.043 Flush (00h): Supported 00:27:50.043 Write (01h): Supported LBA-Change 00:27:50.043 Read (02h): Supported 00:27:50.043 Write Zeroes (08h): Supported LBA-Change 00:27:50.043 Dataset Management (09h): Supported 00:27:50.043 00:27:50.043 Error Log 00:27:50.043 ========= 00:27:50.043 Entry: 0 00:27:50.043 Error Count: 0x3 00:27:50.043 Submission 
Queue Id: 0x0 00:27:50.043 Command Id: 0x5 00:27:50.043 Phase Bit: 0 00:27:50.043 Status Code: 0x2 00:27:50.043 Status Code Type: 0x0 00:27:50.043 Do Not Retry: 1 00:27:50.043 Error Location: 0x28 00:27:50.043 LBA: 0x0 00:27:50.043 Namespace: 0x0 00:27:50.043 Vendor Log Page: 0x0 00:27:50.043 ----------- 00:27:50.043 Entry: 1 00:27:50.043 Error Count: 0x2 00:27:50.043 Submission Queue Id: 0x0 00:27:50.043 Command Id: 0x5 00:27:50.043 Phase Bit: 0 00:27:50.043 Status Code: 0x2 00:27:50.043 Status Code Type: 0x0 00:27:50.043 Do Not Retry: 1 00:27:50.043 Error Location: 0x28 00:27:50.043 LBA: 0x0 00:27:50.043 Namespace: 0x0 00:27:50.043 Vendor Log Page: 0x0 00:27:50.043 ----------- 00:27:50.043 Entry: 2 00:27:50.043 Error Count: 0x1 00:27:50.043 Submission Queue Id: 0x0 00:27:50.043 Command Id: 0x4 00:27:50.043 Phase Bit: 0 00:27:50.043 Status Code: 0x2 00:27:50.043 Status Code Type: 0x0 00:27:50.043 Do Not Retry: 1 00:27:50.043 Error Location: 0x28 00:27:50.043 LBA: 0x0 00:27:50.043 Namespace: 0x0 00:27:50.043 Vendor Log Page: 0x0 00:27:50.043 00:27:50.043 Number of Queues 00:27:50.043 ================ 00:27:50.043 Number of I/O Submission Queues: 128 00:27:50.043 Number of I/O Completion Queues: 128 00:27:50.043 00:27:50.043 ZNS Specific Controller Data 00:27:50.043 ============================ 00:27:50.043 Zone Append Size Limit: 0 00:27:50.043 00:27:50.043 00:27:50.043 Active Namespaces 00:27:50.043 ================= 00:27:50.043 get_feature(0x05) failed 00:27:50.043 Namespace ID:1 00:27:50.043 Command Set Identifier: NVM (00h) 00:27:50.043 Deallocate: Supported 00:27:50.043 Deallocated/Unwritten Error: Not Supported 00:27:50.043 Deallocated Read Value: Unknown 00:27:50.043 Deallocate in Write Zeroes: Not Supported 00:27:50.043 Deallocated Guard Field: 0xFFFF 00:27:50.044 Flush: Supported 00:27:50.044 Reservation: Not Supported 00:27:50.044 Namespace Sharing Capabilities: Multiple Controllers 00:27:50.044 Size (in LBAs): 1310720 (5GiB) 00:27:50.044 Capacity (in LBAs): 1310720 (5GiB) 00:27:50.044 Utilization (in LBAs): 1310720 (5GiB) 00:27:50.044 UUID: 7ac4676b-359c-42cc-9f83-7ca6753081b7 00:27:50.044 Thin Provisioning: Not Supported 00:27:50.044 Per-NS Atomic Units: Yes 00:27:50.044 Atomic Boundary Size (Normal): 0 00:27:50.044 Atomic Boundary Size (PFail): 0 00:27:50.044 Atomic Boundary Offset: 0 00:27:50.044 NGUID/EUI64 Never Reused: No 00:27:50.044 ANA group ID: 1 00:27:50.044 Namespace Write Protected: No 00:27:50.044 Number of LBA Formats: 1 00:27:50.044 Current LBA Format: LBA Format #00 00:27:50.044 LBA Format #00: Data Size: 4096 Metadata Size: 0 00:27:50.044 00:27:50.044 15:46:20 -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:27:50.044 15:46:20 -- nvmf/common.sh@477 -- # nvmfcleanup 00:27:50.044 15:46:20 -- nvmf/common.sh@117 -- # sync 00:27:50.044 15:46:20 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:50.044 15:46:20 -- nvmf/common.sh@120 -- # set +e 00:27:50.044 15:46:20 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:50.044 15:46:20 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:50.044 rmmod nvme_tcp 00:27:50.044 rmmod nvme_fabrics 00:27:50.044 15:46:20 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:50.044 15:46:20 -- nvmf/common.sh@124 -- # set -e 00:27:50.044 15:46:20 -- nvmf/common.sh@125 -- # return 0 00:27:50.044 15:46:20 -- nvmf/common.sh@478 -- # '[' -n '' ']' 00:27:50.044 15:46:20 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:27:50.044 15:46:20 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:27:50.044 15:46:20 -- 
nvmf/common.sh@485 -- # nvmf_tcp_fini 00:27:50.044 15:46:20 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:50.044 15:46:20 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:50.044 15:46:20 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.044 15:46:20 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:50.044 15:46:20 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.044 15:46:20 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:27:50.044 15:46:20 -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:27:50.044 15:46:20 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:50.044 15:46:20 -- nvmf/common.sh@675 -- # echo 0 00:27:50.044 15:46:20 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:50.044 15:46:20 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:50.044 15:46:20 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:50.301 15:46:20 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:50.302 15:46:20 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:27:50.302 15:46:20 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:27:50.302 15:46:20 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:27:50.867 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:50.867 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:27:51.124 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:27:51.124 00:27:51.124 real 0m2.857s 00:27:51.124 user 0m1.012s 00:27:51.124 sys 0m1.324s 00:27:51.125 15:46:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:27:51.125 15:46:21 -- common/autotest_common.sh@10 -- # set +x 00:27:51.125 ************************************ 00:27:51.125 END TEST nvmf_identify_kernel_target 00:27:51.125 ************************************ 00:27:51.125 15:46:21 -- nvmf/nvmf.sh@102 -- # run_test nvmf_auth /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:51.125 15:46:21 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:27:51.125 15:46:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:27:51.125 15:46:21 -- common/autotest_common.sh@10 -- # set +x 00:27:51.125 ************************************ 00:27:51.125 START TEST nvmf_auth 00:27:51.125 ************************************ 00:27:51.125 15:46:21 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/auth.sh --transport=tcp 00:27:51.383 * Looking for test storage... 
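The configure_kernel_target / clean_kernel_target steps traced in this test drive the Linux nvmet target entirely through configfs. Condensed into a standalone sketch (NQN, block device and address taken from this run; redirection targets are not visible in the xtrace, so the attribute file names below are filled in from the standard nvmet configfs layout):

# setup: export /dev/nvme1n1 as nqn.2016-06.io.spdk:testnqn on 10.0.0.1:4420 (TCP)
modprobe nvmet
modprobe nvmet-tcp
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
mkdir $subsys $subsys/namespaces/1 $nvmet/ports/1
echo 1            > $subsys/attr_allow_any_host
echo /dev/nvme1n1 > $subsys/namespaces/1/device_path
echo 1            > $subsys/namespaces/1/enable
echo 10.0.0.1     > $nvmet/ports/1/addr_traddr
echo tcp          > $nvmet/ports/1/addr_trtype
echo 4420         > $nvmet/ports/1/addr_trsvcid
echo ipv4         > $nvmet/ports/1/addr_adrfam
ln -s $subsys $nvmet/ports/1/subsystems/

# teardown: mirror image, as run by clean_kernel_target above
echo 0 > $subsys/namespaces/1/enable
rm -f  $nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn
rmdir  $subsys/namespaces/1 $nvmet/ports/1 $subsys
modprobe -r nvmet_tcp nvmet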
00:27:51.383 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:27:51.383 15:46:21 -- host/auth.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:27:51.383 15:46:21 -- nvmf/common.sh@7 -- # uname -s 00:27:51.383 15:46:21 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:51.383 15:46:21 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:51.383 15:46:21 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:51.383 15:46:21 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:51.383 15:46:21 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:51.383 15:46:21 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:51.383 15:46:21 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:51.383 15:46:21 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:51.383 15:46:21 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:51.383 15:46:21 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:51.383 15:46:21 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:51.383 15:46:21 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:27:51.383 15:46:21 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:51.383 15:46:21 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:51.383 15:46:21 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:27:51.383 15:46:21 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:51.383 15:46:21 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:27:51.383 15:46:21 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:51.383 15:46:21 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:51.383 15:46:21 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:51.383 15:46:21 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.383 15:46:21 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.383 15:46:21 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.383 15:46:21 -- paths/export.sh@5 -- # export PATH 00:27:51.383 15:46:21 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:51.383 15:46:21 -- nvmf/common.sh@47 -- # : 0 00:27:51.383 15:46:21 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:51.383 15:46:21 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:51.383 15:46:21 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:51.383 15:46:21 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:51.383 15:46:21 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:51.383 15:46:21 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:51.383 15:46:21 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:51.383 15:46:21 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:51.383 15:46:21 -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:27:51.383 15:46:21 -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:27:51.383 15:46:21 -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:27:51.383 15:46:21 -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:27:51.383 15:46:21 -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:51.383 15:46:21 -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:51.383 15:46:21 -- host/auth.sh@21 -- # keys=() 00:27:51.383 15:46:21 -- host/auth.sh@77 -- # nvmftestinit 00:27:51.383 15:46:21 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:27:51.383 15:46:21 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:51.383 15:46:21 -- nvmf/common.sh@437 -- # prepare_net_devs 00:27:51.383 15:46:21 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:27:51.383 15:46:21 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:27:51.383 15:46:21 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:51.383 15:46:21 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:51.383 15:46:21 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:51.383 15:46:21 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:27:51.383 15:46:21 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:27:51.383 15:46:21 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:27:51.383 15:46:21 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:27:51.383 15:46:21 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:27:51.383 15:46:21 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:27:51.383 15:46:21 -- 
nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:51.383 15:46:21 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.384 15:46:21 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:27:51.384 15:46:21 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:27:51.384 15:46:21 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:27:51.384 15:46:21 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:27:51.384 15:46:21 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:27:51.384 15:46:21 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.384 15:46:21 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:27:51.384 15:46:21 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:27:51.384 15:46:21 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:27:51.384 15:46:21 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:27:51.384 15:46:21 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:27:51.384 15:46:21 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:27:51.384 Cannot find device "nvmf_tgt_br" 00:27:51.384 15:46:21 -- nvmf/common.sh@155 -- # true 00:27:51.384 15:46:21 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:27:51.384 Cannot find device "nvmf_tgt_br2" 00:27:51.384 15:46:21 -- nvmf/common.sh@156 -- # true 00:27:51.384 15:46:21 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:27:51.384 15:46:21 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:27:51.384 Cannot find device "nvmf_tgt_br" 00:27:51.384 15:46:21 -- nvmf/common.sh@158 -- # true 00:27:51.384 15:46:21 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:27:51.384 Cannot find device "nvmf_tgt_br2" 00:27:51.384 15:46:21 -- nvmf/common.sh@159 -- # true 00:27:51.384 15:46:21 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:27:51.384 15:46:21 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:27:51.384 15:46:21 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:27:51.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:51.384 15:46:21 -- nvmf/common.sh@162 -- # true 00:27:51.384 15:46:21 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:27:51.384 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:27:51.384 15:46:21 -- nvmf/common.sh@163 -- # true 00:27:51.384 15:46:21 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:27:51.384 15:46:21 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:27:51.384 15:46:21 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:27:51.384 15:46:21 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:27:51.384 15:46:21 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:27:51.384 15:46:21 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:27:51.384 15:46:21 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:27:51.384 15:46:21 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:27:51.384 15:46:21 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:27:51.384 15:46:21 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:27:51.641 15:46:21 -- 
nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:27:51.641 15:46:21 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:27:51.641 15:46:21 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:27:51.641 15:46:21 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:27:51.641 15:46:21 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:27:51.641 15:46:21 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:27:51.641 15:46:21 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:27:51.641 15:46:21 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:27:51.641 15:46:21 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:27:51.641 15:46:21 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:27:51.641 15:46:21 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:27:51.641 15:46:21 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:27:51.641 15:46:21 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:27:51.641 15:46:21 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:27:51.641 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:51.641 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.090 ms 00:27:51.641 00:27:51.641 --- 10.0.0.2 ping statistics --- 00:27:51.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.641 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:27:51.641 15:46:21 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:27:51.641 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:27:51.641 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.053 ms 00:27:51.641 00:27:51.641 --- 10.0.0.3 ping statistics --- 00:27:51.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.641 rtt min/avg/max/mdev = 0.053/0.053/0.053/0.000 ms 00:27:51.641 15:46:21 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:27:51.641 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.641 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:27:51.641 00:27:51.641 --- 10.0.0.1 ping statistics --- 00:27:51.641 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.641 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:27:51.641 15:46:21 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.641 15:46:21 -- nvmf/common.sh@422 -- # return 0 00:27:51.641 15:46:21 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:27:51.641 15:46:21 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.641 15:46:21 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:27:51.641 15:46:21 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:27:51.642 15:46:21 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.642 15:46:21 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:27:51.642 15:46:21 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:27:51.642 15:46:21 -- host/auth.sh@78 -- # nvmfappstart -L nvme_auth 00:27:51.642 15:46:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:27:51.642 15:46:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:27:51.642 15:46:21 -- common/autotest_common.sh@10 -- # set +x 00:27:51.642 15:46:21 -- nvmf/common.sh@470 -- # nvmfpid=83720 00:27:51.642 15:46:21 -- nvmf/common.sh@471 -- # waitforlisten 83720 00:27:51.642 15:46:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:27:51.642 15:46:21 -- common/autotest_common.sh@817 -- # '[' -z 83720 ']' 00:27:51.642 15:46:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.642 15:46:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:51.642 15:46:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
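As in the previous test, nvmf_veth_init builds the test network before the target starts: nvmf_init_if (10.0.0.1) stays in the root namespace as the initiator interface, nvmf_tgt_if (10.0.0.2) and nvmf_tgt_if2 (10.0.0.3) are moved into the nvmf_tgt_ns_spdk namespace, and the peer ends of each veth pair are enslaved to the nvmf_br bridge; the SPDK target then runs inside that namespace. Reduced to one target interface, with the iptables rules and error handling omitted, the topology amounts to roughly:

ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip link add nvmf_br type bridge
for dev in nvmf_init_if nvmf_init_br nvmf_tgt_br nvmf_br; do ip link set "$dev" up; done
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1   # initiator reachable from the target namespace
# the target app is then launched inside the namespace:
# ip netns exec nvmf_tgt_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth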
00:27:51.642 15:46:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:51.642 15:46:21 -- common/autotest_common.sh@10 -- # set +x 00:27:53.012 15:46:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:53.012 15:46:22 -- common/autotest_common.sh@850 -- # return 0 00:27:53.012 15:46:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:27:53.012 15:46:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:27:53.012 15:46:22 -- common/autotest_common.sh@10 -- # set +x 00:27:53.012 15:46:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:53.012 15:46:22 -- host/auth.sh@79 -- # trap 'cat /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:27:53.012 15:46:22 -- host/auth.sh@81 -- # gen_key null 32 00:27:53.013 15:46:22 -- host/auth.sh@53 -- # local digest len file key 00:27:53.013 15:46:22 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.013 15:46:22 -- host/auth.sh@54 -- # local -A digests 00:27:53.013 15:46:22 -- host/auth.sh@56 -- # digest=null 00:27:53.013 15:46:22 -- host/auth.sh@56 -- # len=32 00:27:53.013 15:46:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:53.013 15:46:22 -- host/auth.sh@57 -- # key=b9af8d98258d344e000a3744936f692a 00:27:53.013 15:46:22 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:53.013 15:46:22 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.E1m 00:27:53.013 15:46:22 -- host/auth.sh@59 -- # format_dhchap_key b9af8d98258d344e000a3744936f692a 0 00:27:53.013 15:46:22 -- nvmf/common.sh@708 -- # format_key DHHC-1 b9af8d98258d344e000a3744936f692a 0 00:27:53.013 15:46:22 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:53.013 15:46:22 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:53.013 15:46:22 -- nvmf/common.sh@693 -- # key=b9af8d98258d344e000a3744936f692a 00:27:53.013 15:46:22 -- nvmf/common.sh@693 -- # digest=0 00:27:53.013 15:46:22 -- nvmf/common.sh@694 -- # python - 00:27:53.013 15:46:22 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.E1m 00:27:53.013 15:46:22 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.E1m 00:27:53.013 15:46:22 -- host/auth.sh@81 -- # keys[0]=/tmp/spdk.key-null.E1m 00:27:53.013 15:46:22 -- host/auth.sh@82 -- # gen_key null 48 00:27:53.013 15:46:22 -- host/auth.sh@53 -- # local digest len file key 00:27:53.013 15:46:22 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.013 15:46:22 -- host/auth.sh@54 -- # local -A digests 00:27:53.013 15:46:22 -- host/auth.sh@56 -- # digest=null 00:27:53.013 15:46:22 -- host/auth.sh@56 -- # len=48 00:27:53.013 15:46:22 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:53.013 15:46:22 -- host/auth.sh@57 -- # key=ca56174c8d6d94a8bb276f6384d58aa117df32da19bc6954 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-null.XXX 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-null.UEe 00:27:53.013 15:46:23 -- host/auth.sh@59 -- # format_dhchap_key ca56174c8d6d94a8bb276f6384d58aa117df32da19bc6954 0 00:27:53.013 15:46:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 ca56174c8d6d94a8bb276f6384d58aa117df32da19bc6954 0 00:27:53.013 15:46:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # key=ca56174c8d6d94a8bb276f6384d58aa117df32da19bc6954 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # digest=0 00:27:53.013 
15:46:23 -- nvmf/common.sh@694 -- # python - 00:27:53.013 15:46:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-null.UEe 00:27:53.013 15:46:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-null.UEe 00:27:53.013 15:46:23 -- host/auth.sh@82 -- # keys[1]=/tmp/spdk.key-null.UEe 00:27:53.013 15:46:23 -- host/auth.sh@83 -- # gen_key sha256 32 00:27:53.013 15:46:23 -- host/auth.sh@53 -- # local digest len file key 00:27:53.013 15:46:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.013 15:46:23 -- host/auth.sh@54 -- # local -A digests 00:27:53.013 15:46:23 -- host/auth.sh@56 -- # digest=sha256 00:27:53.013 15:46:23 -- host/auth.sh@56 -- # len=32 00:27:53.013 15:46:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 16 /dev/urandom 00:27:53.013 15:46:23 -- host/auth.sh@57 -- # key=fa63382c952eee11aab1db8a54148fd2 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha256.XXX 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha256.tpt 00:27:53.013 15:46:23 -- host/auth.sh@59 -- # format_dhchap_key fa63382c952eee11aab1db8a54148fd2 1 00:27:53.013 15:46:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 fa63382c952eee11aab1db8a54148fd2 1 00:27:53.013 15:46:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # key=fa63382c952eee11aab1db8a54148fd2 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # digest=1 00:27:53.013 15:46:23 -- nvmf/common.sh@694 -- # python - 00:27:53.013 15:46:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha256.tpt 00:27:53.013 15:46:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha256.tpt 00:27:53.013 15:46:23 -- host/auth.sh@83 -- # keys[2]=/tmp/spdk.key-sha256.tpt 00:27:53.013 15:46:23 -- host/auth.sh@84 -- # gen_key sha384 48 00:27:53.013 15:46:23 -- host/auth.sh@53 -- # local digest len file key 00:27:53.013 15:46:23 -- host/auth.sh@54 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.013 15:46:23 -- host/auth.sh@54 -- # local -A digests 00:27:53.013 15:46:23 -- host/auth.sh@56 -- # digest=sha384 00:27:53.013 15:46:23 -- host/auth.sh@56 -- # len=48 00:27:53.013 15:46:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 24 /dev/urandom 00:27:53.013 15:46:23 -- host/auth.sh@57 -- # key=288fb2dac5f5b5cf75189374363e076133edd581436cd1b5 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha384.XXX 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha384.idD 00:27:53.013 15:46:23 -- host/auth.sh@59 -- # format_dhchap_key 288fb2dac5f5b5cf75189374363e076133edd581436cd1b5 2 00:27:53.013 15:46:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 288fb2dac5f5b5cf75189374363e076133edd581436cd1b5 2 00:27:53.013 15:46:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # key=288fb2dac5f5b5cf75189374363e076133edd581436cd1b5 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # digest=2 00:27:53.013 15:46:23 -- nvmf/common.sh@694 -- # python - 00:27:53.013 15:46:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha384.idD 00:27:53.013 15:46:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha384.idD 00:27:53.013 15:46:23 -- host/auth.sh@84 -- # keys[3]=/tmp/spdk.key-sha384.idD 00:27:53.013 15:46:23 -- host/auth.sh@85 -- # gen_key sha512 64 00:27:53.013 15:46:23 -- host/auth.sh@53 -- # local digest len file key 00:27:53.013 15:46:23 -- host/auth.sh@54 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:27:53.013 15:46:23 -- host/auth.sh@54 -- # local -A digests 00:27:53.013 15:46:23 -- host/auth.sh@56 -- # digest=sha512 00:27:53.013 15:46:23 -- host/auth.sh@56 -- # len=64 00:27:53.013 15:46:23 -- host/auth.sh@57 -- # xxd -p -c0 -l 32 /dev/urandom 00:27:53.013 15:46:23 -- host/auth.sh@57 -- # key=c8e16dd790e67ed00a80b6c9c7d67f456072635c16c1871a41fb350c08372732 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # mktemp -t spdk.key-sha512.XXX 00:27:53.013 15:46:23 -- host/auth.sh@58 -- # file=/tmp/spdk.key-sha512.P6P 00:27:53.013 15:46:23 -- host/auth.sh@59 -- # format_dhchap_key c8e16dd790e67ed00a80b6c9c7d67f456072635c16c1871a41fb350c08372732 3 00:27:53.013 15:46:23 -- nvmf/common.sh@708 -- # format_key DHHC-1 c8e16dd790e67ed00a80b6c9c7d67f456072635c16c1871a41fb350c08372732 3 00:27:53.013 15:46:23 -- nvmf/common.sh@691 -- # local prefix key digest 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # prefix=DHHC-1 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # key=c8e16dd790e67ed00a80b6c9c7d67f456072635c16c1871a41fb350c08372732 00:27:53.013 15:46:23 -- nvmf/common.sh@693 -- # digest=3 00:27:53.013 15:46:23 -- nvmf/common.sh@694 -- # python - 00:27:53.013 15:46:23 -- host/auth.sh@60 -- # chmod 0600 /tmp/spdk.key-sha512.P6P 00:27:53.013 15:46:23 -- host/auth.sh@62 -- # echo /tmp/spdk.key-sha512.P6P 00:27:53.013 15:46:23 -- host/auth.sh@85 -- # keys[4]=/tmp/spdk.key-sha512.P6P 00:27:53.013 15:46:23 -- host/auth.sh@87 -- # waitforlisten 83720 00:27:53.013 15:46:23 -- common/autotest_common.sh@817 -- # '[' -z 83720 ']' 00:27:53.013 15:46:23 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:53.013 15:46:23 -- common/autotest_common.sh@822 -- # local max_retries=100 00:27:53.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:53.013 15:46:23 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
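For reference, the gen_key/format_dhchap_key sequence traced above reduces to the sketch below. The variable names (len, digest_id) mirror the trace; the Python body is a reconstruction of the standard NVMe DH-HMAC-CHAP secret representation (base64 of the secret with a little-endian CRC-32 appended), not a verbatim copy of nvmf/common.sh:

# draw <len> hex characters of key material, e.g. len=48 for the second key above
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)

# wrap it as DHHC-1:<2-digit hash id>:<base64(secret || crc32(secret), little-endian)>:
python3 - "$key" "$digest_id" <<'PY'
import base64, struct, sys, zlib
secret, hash_id = sys.argv[1].encode(), sys.argv[2]
blob = base64.b64encode(secret + struct.pack('<I', zlib.crc32(secret))).decode()
print(f"DHHC-1:{hash_id:0>2}:{blob}:")
PY

The resulting strings are exactly the DHHC-1:00:...:, DHHC-1:01:...: values that appear later in the trace.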
00:27:53.013 15:46:23 -- common/autotest_common.sh@826 -- # xtrace_disable 00:27:53.013 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.271 15:46:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:27:53.271 15:46:23 -- common/autotest_common.sh@850 -- # return 0 00:27:53.271 15:46:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:53.271 15:46:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.E1m 00:27:53.271 15:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.271 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.271 15:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.271 15:46:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:53.271 15:46:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.UEe 00:27:53.271 15:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.271 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.271 15:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.271 15:46:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:53.271 15:46:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.tpt 00:27:53.271 15:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.271 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.271 15:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.271 15:46:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:53.271 15:46:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.idD 00:27:53.271 15:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.271 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.271 15:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.271 15:46:23 -- host/auth.sh@88 -- # for i in "${!keys[@]}" 00:27:53.271 15:46:23 -- host/auth.sh@89 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.P6P 00:27:53.271 15:46:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:53.271 15:46:23 -- common/autotest_common.sh@10 -- # set +x 00:27:53.271 15:46:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:53.271 15:46:23 -- host/auth.sh@92 -- # nvmet_auth_init 00:27:53.271 15:46:23 -- host/auth.sh@35 -- # get_main_ns_ip 00:27:53.271 15:46:23 -- nvmf/common.sh@717 -- # local ip 00:27:53.271 15:46:23 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:53.271 15:46:23 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:53.271 15:46:23 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:53.271 15:46:23 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:53.271 15:46:23 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:53.271 15:46:23 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:53.271 15:46:23 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:53.271 15:46:23 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:53.271 15:46:23 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:53.271 15:46:23 -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:27:53.271 15:46:23 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:27:53.271 15:46:23 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:27:53.271 15:46:23 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:53.271 15:46:23 -- nvmf/common.sh@625 -- # 
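The five keyring_file_add_key calls above register the freshly generated secret files with the running nvmf target under the names key0 through key4; those names are what the later attach calls reference via --dhchap-key. rpc_cmd is the autotest wrapper around scripts/rpc.py, so outside the harness the loop is roughly:

# register each generated DHHC-1 secret file with the SPDK keyring as key0..key4
for i in "${!keys[@]}"; do
  scripts/rpc.py keyring_file_add_key "key$i" "${keys[$i]}"
done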
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:53.271 15:46:23 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:53.271 15:46:23 -- nvmf/common.sh@628 -- # local block nvme 00:27:53.271 15:46:23 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:27:53.271 15:46:23 -- nvmf/common.sh@631 -- # modprobe nvmet 00:27:53.528 15:46:23 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:53.528 15:46:23 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:27:53.785 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:27:53.785 Waiting for block devices as requested 00:27:53.785 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:27:53.785 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:27:54.415 15:46:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:54.415 15:46:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:54.415 15:46:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:27:54.415 15:46:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:27:54.415 15:46:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:54.415 15:46:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:54.415 15:46:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:27:54.415 15:46:24 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:54.415 15:46:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:27:54.415 No valid GPT data, bailing 00:27:54.415 15:46:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:54.415 15:46:24 -- scripts/common.sh@391 -- # pt= 00:27:54.415 15:46:24 -- scripts/common.sh@392 -- # return 1 00:27:54.415 15:46:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:27:54.415 15:46:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:54.415 15:46:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:27:54.415 15:46:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:27:54.415 15:46:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:27:54.415 15:46:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:27:54.415 15:46:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:54.415 15:46:24 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:27:54.415 15:46:24 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:27:54.415 15:46:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:27:54.673 No valid GPT data, bailing 00:27:54.673 15:46:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:27:54.673 15:46:24 -- scripts/common.sh@391 -- # pt= 00:27:54.673 15:46:24 -- scripts/common.sh@392 -- # return 1 00:27:54.673 15:46:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:27:54.673 15:46:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:54.673 15:46:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:27:54.673 15:46:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:27:54.673 15:46:24 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:27:54.673 15:46:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:27:54.673 15:46:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:54.673 15:46:24 -- nvmf/common.sh@642 -- # block_in_use 
nvme0n3 00:27:54.673 15:46:24 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:27:54.673 15:46:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:27:54.673 No valid GPT data, bailing 00:27:54.673 15:46:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:27:54.673 15:46:24 -- scripts/common.sh@391 -- # pt= 00:27:54.673 15:46:24 -- scripts/common.sh@392 -- # return 1 00:27:54.673 15:46:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:27:54.673 15:46:24 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:27:54.673 15:46:24 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:27:54.673 15:46:24 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:27:54.673 15:46:24 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:27:54.673 15:46:24 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:27:54.673 15:46:24 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:27:54.673 15:46:24 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:27:54.673 15:46:24 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:27:54.673 15:46:24 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:27:54.673 No valid GPT data, bailing 00:27:54.673 15:46:24 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:27:54.673 15:46:24 -- scripts/common.sh@391 -- # pt= 00:27:54.673 15:46:24 -- scripts/common.sh@392 -- # return 1 00:27:54.673 15:46:24 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:27:54.673 15:46:24 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:27:54.673 15:46:24 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:27:54.673 15:46:24 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:27:54.673 15:46:24 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:54.673 15:46:24 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:27:54.673 15:46:24 -- nvmf/common.sh@656 -- # echo 1 00:27:54.673 15:46:24 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:27:54.673 15:46:24 -- nvmf/common.sh@658 -- # echo 1 00:27:54.673 15:46:24 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:27:54.673 15:46:24 -- nvmf/common.sh@661 -- # echo tcp 00:27:54.673 15:46:24 -- nvmf/common.sh@662 -- # echo 4420 00:27:54.673 15:46:24 -- nvmf/common.sh@663 -- # echo ipv4 00:27:54.673 15:46:24 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:54.673 15:46:24 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -a 10.0.0.1 -t tcp -s 4420 00:27:54.931 00:27:54.931 Discovery Log Number of Records 2, Generation counter 2 00:27:54.931 =====Discovery Log Entry 0====== 00:27:54.931 trtype: tcp 00:27:54.931 adrfam: ipv4 00:27:54.931 subtype: current discovery subsystem 00:27:54.931 treq: not specified, sq flow control disable supported 00:27:54.931 portid: 1 00:27:54.931 trsvcid: 4420 00:27:54.931 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:54.931 traddr: 10.0.0.1 00:27:54.931 eflags: none 00:27:54.931 sectype: none 00:27:54.931 =====Discovery Log Entry 1====== 00:27:54.931 trtype: tcp 00:27:54.931 adrfam: ipv4 00:27:54.931 subtype: nvme subsystem 00:27:54.931 treq: not specified, sq flow control disable supported 
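By this point a free local namespace has been picked (the "No valid GPT data, bailing" lines are the expected result of skipping zoned or already-partitioned devices; the last free namespace, /dev/nvme1n1, wins) and the mkdir/echo/ln sequence above has built a kernel NVMe-oF soft target through configfs. xtrace does not show redirection targets, so the attribute file names below are filled in from the standard nvmet configfs layout rather than read from the trace:

nqn=nqn.2024-02.io.spdk:cnode0
sub=/sys/kernel/config/nvmet/subsystems/$nqn
port=/sys/kernel/config/nvmet/ports/1

mkdir "$sub" "$sub/namespaces/1" "$port"
echo "SPDK-$nqn"  > "$sub/attr_model"                 # model string echoed above
echo 1            > "$sub/attr_allow_any_host"        # relaxed here; tightened once auth is configured
echo /dev/nvme1n1 > "$sub/namespaces/1/device_path"   # namespace claimed by the device scan
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"                      # publish the subsystem on the TCP port

The nvme discover call that follows simply confirms the new subsystem is visible next to the discovery subsystem on 10.0.0.1:4420.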
00:27:54.931 portid: 1 00:27:54.931 trsvcid: 4420 00:27:54.931 subnqn: nqn.2024-02.io.spdk:cnode0 00:27:54.931 traddr: 10.0.0.1 00:27:54.931 eflags: none 00:27:54.931 sectype: none 00:27:54.931 15:46:24 -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:27:54.932 15:46:24 -- host/auth.sh@37 -- # echo 0 00:27:54.932 15:46:24 -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:27:54.932 15:46:24 -- host/auth.sh@95 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:54.932 15:46:24 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:54.932 15:46:24 -- host/auth.sh@44 -- # digest=sha256 00:27:54.932 15:46:24 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:54.932 15:46:24 -- host/auth.sh@44 -- # keyid=1 00:27:54.932 15:46:24 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:54.932 15:46:24 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:54.932 15:46:24 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:54.932 15:46:25 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:54.932 15:46:25 -- host/auth.sh@100 -- # IFS=, 00:27:54.932 15:46:25 -- host/auth.sh@101 -- # printf %s sha256,sha384,sha512 00:27:54.932 15:46:25 -- host/auth.sh@100 -- # IFS=, 00:27:54.932 15:46:25 -- host/auth.sh@101 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.932 15:46:25 -- host/auth.sh@100 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:27:54.932 15:46:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:54.932 15:46:25 -- host/auth.sh@68 -- # digest=sha256,sha384,sha512 00:27:54.932 15:46:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.932 15:46:25 -- host/auth.sh@68 -- # keyid=1 00:27:54.932 15:46:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:27:54.932 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.932 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.932 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.932 15:46:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:54.932 15:46:25 -- nvmf/common.sh@717 -- # local ip 00:27:54.932 15:46:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:54.932 15:46:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:54.932 15:46:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:54.932 15:46:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:54.932 15:46:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:54.932 15:46:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:54.932 15:46:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:54.932 15:46:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:54.932 15:46:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:54.932 15:46:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:54.932 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.932 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.932 
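The auth setup itself is the mkdir hosts/..., echo 0, ln -s ... allowed_hosts, and nvmet_auth_set_key steps traced above: the target is restricted to the single host NQN and told which digest, DH group, and DHHC-1 secret to expect from it. The redirection targets are again hidden by xtrace; the file names below follow the kernel nvmet host configfs attributes and are an inference, not a quote from the log:

host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
sub=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0

mkdir "$host"
echo 0 > "$sub/attr_allow_any_host"                   # only explicitly allowed hosts may connect
ln -s "$host" "$sub/allowed_hosts/"
echo 'hmac(sha256)'  > "$host/dhchap_hash"            # digest under test
echo ffdhe2048       > "$host/dhchap_dhgroup"         # DH group under test
echo 'DHHC-1:00:...' > "$host/dhchap_key"             # same secret registered as key1 on the SPDK side

The keyid argument selects which of keys[0..4] is written here, so the initiator can present the matching keyring entry by name (--dhchap-key key1 in this first attach).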
nvme0n1 00:27:54.932 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:54.932 15:46:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:54.932 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:54.932 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:54.932 15:46:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:54.932 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.190 15:46:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.190 15:46:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.190 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.190 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.190 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.190 15:46:25 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:27:55.190 15:46:25 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.190 15:46:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.190 15:46:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:27:55.190 15:46:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.191 15:46:25 -- host/auth.sh@44 -- # digest=sha256 00:27:55.191 15:46:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.191 15:46:25 -- host/auth.sh@44 -- # keyid=0 00:27:55.191 15:46:25 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:55.191 15:46:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.191 15:46:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:55.191 15:46:25 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:55.191 15:46:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 0 00:27:55.191 15:46:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.191 15:46:25 -- host/auth.sh@68 -- # digest=sha256 00:27:55.191 15:46:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:55.191 15:46:25 -- host/auth.sh@68 -- # keyid=0 00:27:55.191 15:46:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.191 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.191 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.191 15:46:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.191 15:46:25 -- nvmf/common.sh@717 -- # local ip 00:27:55.191 15:46:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.191 15:46:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.191 15:46:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.191 15:46:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.191 15:46:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.191 15:46:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.191 15:46:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.191 15:46:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.191 15:46:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.191 15:46:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:55.191 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.191 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 nvme0n1 
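connect_authenticate then exercises the initiator side through the SPDK RPCs visible in the trace: narrow the allowed digests and DH groups, attach with the keyring entry the target expects, confirm a controller actually came up (an authentication failure would leave bdev_nvme_get_controllers empty), and detach. With rpc_cmd expanded to scripts/rpc.py, one iteration looks roughly like:

# restrict the initiator to the digest/dhgroup under test
scripts/rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
# attach to the kernel target, presenting the named keyring secret for DH-HMAC-CHAP
scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
  -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
# the controller (and its nvme0n1 bdev) only exists if authentication succeeded
[[ "$(scripts/rpc.py bdev_nvme_get_controllers | jq -r '.[].name')" == nvme0 ]]
scripts/rpc.py bdev_nvme_detach_controller nvme0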
00:27:55.191 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.191 15:46:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.191 15:46:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.191 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.191 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.191 15:46:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.191 15:46:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.191 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.191 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.191 15:46:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.191 15:46:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:27:55.191 15:46:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.191 15:46:25 -- host/auth.sh@44 -- # digest=sha256 00:27:55.191 15:46:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.191 15:46:25 -- host/auth.sh@44 -- # keyid=1 00:27:55.191 15:46:25 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:55.191 15:46:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.191 15:46:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:55.191 15:46:25 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:55.191 15:46:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 1 00:27:55.191 15:46:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.191 15:46:25 -- host/auth.sh@68 -- # digest=sha256 00:27:55.191 15:46:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:55.191 15:46:25 -- host/auth.sh@68 -- # keyid=1 00:27:55.191 15:46:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.191 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.191 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.191 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.191 15:46:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.191 15:46:25 -- nvmf/common.sh@717 -- # local ip 00:27:55.191 15:46:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.191 15:46:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.191 15:46:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.191 15:46:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.191 15:46:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.191 15:46:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.191 15:46:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.191 15:46:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.191 15:46:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.191 15:46:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:55.191 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.191 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.450 nvme0n1 00:27:55.450 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.450 15:46:25 -- host/auth.sh@73 -- # 
rpc_cmd bdev_nvme_get_controllers 00:27:55.450 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.450 15:46:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.450 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.450 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.450 15:46:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.450 15:46:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.450 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.450 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.450 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.450 15:46:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.450 15:46:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:27:55.450 15:46:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.450 15:46:25 -- host/auth.sh@44 -- # digest=sha256 00:27:55.450 15:46:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.450 15:46:25 -- host/auth.sh@44 -- # keyid=2 00:27:55.450 15:46:25 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:27:55.450 15:46:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.450 15:46:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:55.450 15:46:25 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:27:55.450 15:46:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 2 00:27:55.450 15:46:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.450 15:46:25 -- host/auth.sh@68 -- # digest=sha256 00:27:55.450 15:46:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:55.450 15:46:25 -- host/auth.sh@68 -- # keyid=2 00:27:55.450 15:46:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.450 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.450 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.450 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.450 15:46:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.450 15:46:25 -- nvmf/common.sh@717 -- # local ip 00:27:55.450 15:46:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.450 15:46:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.450 15:46:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.450 15:46:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.450 15:46:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.450 15:46:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.450 15:46:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.450 15:46:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.450 15:46:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.450 15:46:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:55.450 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.450 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.450 nvme0n1 00:27:55.450 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.450 15:46:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.450 15:46:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.450 15:46:25 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:55.450 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.709 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.709 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.709 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.709 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.709 15:46:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:27:55.709 15:46:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.709 15:46:25 -- host/auth.sh@44 -- # digest=sha256 00:27:55.709 15:46:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.709 15:46:25 -- host/auth.sh@44 -- # keyid=3 00:27:55.709 15:46:25 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:27:55.709 15:46:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.709 15:46:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:55.709 15:46:25 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:27:55.709 15:46:25 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 3 00:27:55.709 15:46:25 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.709 15:46:25 -- host/auth.sh@68 -- # digest=sha256 00:27:55.709 15:46:25 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:55.709 15:46:25 -- host/auth.sh@68 -- # keyid=3 00:27:55.709 15:46:25 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.709 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.709 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.709 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.709 15:46:25 -- nvmf/common.sh@717 -- # local ip 00:27:55.709 15:46:25 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.709 15:46:25 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.709 15:46:25 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.709 15:46:25 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.709 15:46:25 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.709 15:46:25 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.709 15:46:25 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.709 15:46:25 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.709 15:46:25 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.709 15:46:25 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:55.709 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.709 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.709 nvme0n1 00:27:55.709 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.709 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.709 15:46:25 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.709 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.709 15:46:25 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.709 15:46:25 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.709 15:46:25 -- common/autotest_common.sh@10 -- # set +x 00:27:55.709 15:46:25 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.709 15:46:25 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.709 15:46:25 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:27:55.709 15:46:25 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.709 15:46:25 -- host/auth.sh@44 -- # digest=sha256 00:27:55.709 15:46:25 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:27:55.709 15:46:25 -- host/auth.sh@44 -- # keyid=4 00:27:55.709 15:46:25 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:27:55.709 15:46:25 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.709 15:46:25 -- host/auth.sh@48 -- # echo ffdhe2048 00:27:55.709 15:46:25 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:27:55.967 15:46:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe2048 4 00:27:55.967 15:46:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:55.967 15:46:26 -- host/auth.sh@68 -- # digest=sha256 00:27:55.967 15:46:26 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:27:55.967 15:46:26 -- host/auth.sh@68 -- # keyid=4 00:27:55.967 15:46:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:27:55.967 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.967 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:55.968 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.968 15:46:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:55.968 15:46:26 -- nvmf/common.sh@717 -- # local ip 00:27:55.968 15:46:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:55.968 15:46:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:55.968 15:46:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:55.968 15:46:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:55.968 15:46:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:55.968 15:46:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:55.968 15:46:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:55.968 15:46:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:55.968 15:46:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:55.968 15:46:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:55.968 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.968 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:55.968 nvme0n1 00:27:55.968 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.968 15:46:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:55.968 15:46:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:55.968 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.968 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:55.968 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.968 15:46:26 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:55.968 15:46:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:55.968 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:55.968 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:55.968 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:55.968 15:46:26 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:55.968 15:46:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:55.968 15:46:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:27:55.968 15:46:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:55.968 15:46:26 -- host/auth.sh@44 -- # digest=sha256 00:27:55.968 15:46:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:55.968 15:46:26 -- host/auth.sh@44 -- # keyid=0 00:27:55.968 15:46:26 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:55.968 15:46:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:55.968 15:46:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:56.226 15:46:26 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:56.226 15:46:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 0 00:27:56.226 15:46:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.226 15:46:26 -- host/auth.sh@68 -- # digest=sha256 00:27:56.226 15:46:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:56.226 15:46:26 -- host/auth.sh@68 -- # keyid=0 00:27:56.226 15:46:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.226 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.226 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.524 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.524 15:46:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.524 15:46:26 -- nvmf/common.sh@717 -- # local ip 00:27:56.524 15:46:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.524 15:46:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.524 15:46:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.524 15:46:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.524 15:46:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.524 15:46:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.524 15:46:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.524 15:46:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.524 15:46:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.524 15:46:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:56.524 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.524 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.524 nvme0n1 00:27:56.524 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.524 15:46:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.524 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.524 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.524 15:46:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.524 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.524 15:46:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.524 15:46:26 -- host/auth.sh@74 
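Everything from here to the end of the section is that same set-key/connect/verify/detach cycle repeated for every combination. Reconstructed from the host/auth.sh@107-@111 frames in the trace (with the array contents taken from the earlier printf calls), the driving loops amount to:

for digest in "${digests[@]}"; do          # sha256 sha384 sha512
  for dhgroup in "${dhgroups[@]}"; do      # ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192
    for keyid in "${!keys[@]}"; do         # key0 .. key4
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
      connect_authenticate "$digest" "$dhgroup" "$keyid"
    done
  done
done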
-- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.524 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.524 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.524 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.524 15:46:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.524 15:46:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:27:56.524 15:46:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.524 15:46:26 -- host/auth.sh@44 -- # digest=sha256 00:27:56.524 15:46:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.524 15:46:26 -- host/auth.sh@44 -- # keyid=1 00:27:56.524 15:46:26 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:56.524 15:46:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.524 15:46:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:56.524 15:46:26 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:56.524 15:46:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 1 00:27:56.524 15:46:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.524 15:46:26 -- host/auth.sh@68 -- # digest=sha256 00:27:56.524 15:46:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:56.524 15:46:26 -- host/auth.sh@68 -- # keyid=1 00:27:56.524 15:46:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.524 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.524 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.524 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.524 15:46:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.524 15:46:26 -- nvmf/common.sh@717 -- # local ip 00:27:56.524 15:46:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.524 15:46:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.524 15:46:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.524 15:46:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.524 15:46:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.524 15:46:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.524 15:46:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.524 15:46:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.524 15:46:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.524 15:46:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:56.524 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.524 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.783 nvme0n1 00:27:56.783 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.783 15:46:26 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.784 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.784 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.784 15:46:26 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.784 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.784 15:46:26 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:56.784 15:46:26 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:56.784 15:46:26 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:27:56.784 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.784 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.784 15:46:26 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:56.784 15:46:26 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:27:56.784 15:46:26 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:56.784 15:46:26 -- host/auth.sh@44 -- # digest=sha256 00:27:56.784 15:46:26 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:56.784 15:46:26 -- host/auth.sh@44 -- # keyid=2 00:27:56.784 15:46:26 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:27:56.784 15:46:26 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:56.784 15:46:26 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:56.784 15:46:26 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:27:56.784 15:46:26 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 2 00:27:56.784 15:46:26 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:56.784 15:46:26 -- host/auth.sh@68 -- # digest=sha256 00:27:56.784 15:46:26 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:56.784 15:46:26 -- host/auth.sh@68 -- # keyid=2 00:27:56.784 15:46:26 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:56.784 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.784 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.784 15:46:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.784 15:46:26 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:56.784 15:46:26 -- nvmf/common.sh@717 -- # local ip 00:27:56.784 15:46:26 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:56.784 15:46:26 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:56.784 15:46:26 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:56.784 15:46:26 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:56.784 15:46:26 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:56.784 15:46:26 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:56.784 15:46:26 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:56.784 15:46:26 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:56.784 15:46:26 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:56.784 15:46:26 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:56.784 15:46:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.784 15:46:26 -- common/autotest_common.sh@10 -- # set +x 00:27:56.784 nvme0n1 00:27:56.784 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:56.784 15:46:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:56.784 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:56.784 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:56.784 15:46:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:56.784 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.041 15:46:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.041 15:46:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.041 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.041 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.041 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.041 
15:46:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.042 15:46:27 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:27:57.042 15:46:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.042 15:46:27 -- host/auth.sh@44 -- # digest=sha256 00:27:57.042 15:46:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.042 15:46:27 -- host/auth.sh@44 -- # keyid=3 00:27:57.042 15:46:27 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:27:57.042 15:46:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.042 15:46:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:57.042 15:46:27 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:27:57.042 15:46:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 3 00:27:57.042 15:46:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.042 15:46:27 -- host/auth.sh@68 -- # digest=sha256 00:27:57.042 15:46:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:57.042 15:46:27 -- host/auth.sh@68 -- # keyid=3 00:27:57.042 15:46:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:57.042 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.042 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.042 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.042 15:46:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.042 15:46:27 -- nvmf/common.sh@717 -- # local ip 00:27:57.042 15:46:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.042 15:46:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.042 15:46:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.042 15:46:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.042 15:46:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.042 15:46:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.042 15:46:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.042 15:46:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.042 15:46:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.042 15:46:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:57.042 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.042 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.042 nvme0n1 00:27:57.042 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.042 15:46:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.042 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.042 15:46:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.042 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.042 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.042 15:46:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.042 15:46:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.042 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.042 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.042 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.042 15:46:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.042 15:46:27 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe3072 4 00:27:57.042 15:46:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.042 15:46:27 -- host/auth.sh@44 -- # digest=sha256 00:27:57.042 15:46:27 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:27:57.042 15:46:27 -- host/auth.sh@44 -- # keyid=4 00:27:57.042 15:46:27 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:27:57.042 15:46:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.042 15:46:27 -- host/auth.sh@48 -- # echo ffdhe3072 00:27:57.042 15:46:27 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:27:57.042 15:46:27 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe3072 4 00:27:57.042 15:46:27 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.042 15:46:27 -- host/auth.sh@68 -- # digest=sha256 00:27:57.042 15:46:27 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:27:57.042 15:46:27 -- host/auth.sh@68 -- # keyid=4 00:27:57.042 15:46:27 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:27:57.042 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.042 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.042 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.042 15:46:27 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.042 15:46:27 -- nvmf/common.sh@717 -- # local ip 00:27:57.042 15:46:27 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.042 15:46:27 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.042 15:46:27 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.042 15:46:27 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.042 15:46:27 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.042 15:46:27 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.042 15:46:27 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.042 15:46:27 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.042 15:46:27 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.042 15:46:27 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:57.042 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.042 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.301 nvme0n1 00:27:57.301 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.301 15:46:27 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:57.301 15:46:27 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:57.301 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.301 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.301 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.301 15:46:27 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:57.301 15:46:27 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:57.301 15:46:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.301 15:46:27 -- common/autotest_common.sh@10 -- # set +x 00:27:57.301 15:46:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.301 15:46:27 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:57.301 15:46:27 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:57.301 15:46:27 -- host/auth.sh@110 -- # 
nvmet_auth_set_key sha256 ffdhe4096 0 00:27:57.301 15:46:27 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:57.301 15:46:27 -- host/auth.sh@44 -- # digest=sha256 00:27:57.301 15:46:27 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:57.301 15:46:27 -- host/auth.sh@44 -- # keyid=0 00:27:57.301 15:46:27 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:57.301 15:46:27 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:57.301 15:46:27 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:57.914 15:46:28 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:57.914 15:46:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 0 00:27:57.914 15:46:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:57.914 15:46:28 -- host/auth.sh@68 -- # digest=sha256 00:27:57.914 15:46:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:57.914 15:46:28 -- host/auth.sh@68 -- # keyid=0 00:27:57.914 15:46:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:57.914 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.914 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:57.914 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:57.914 15:46:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:57.914 15:46:28 -- nvmf/common.sh@717 -- # local ip 00:27:57.914 15:46:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:57.914 15:46:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:57.914 15:46:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:57.914 15:46:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:57.914 15:46:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:57.914 15:46:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:57.914 15:46:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:57.914 15:46:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:57.914 15:46:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:57.914 15:46:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:27:57.914 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:57.914 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 nvme0n1 00:27:58.173 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.173 15:46:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.173 15:46:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.173 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.173 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.173 15:46:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.173 15:46:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.173 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.173 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.173 15:46:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.173 15:46:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:27:58.173 15:46:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.173 15:46:28 -- host/auth.sh@44 -- # 
digest=sha256 00:27:58.173 15:46:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.173 15:46:28 -- host/auth.sh@44 -- # keyid=1 00:27:58.173 15:46:28 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:58.173 15:46:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.173 15:46:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:58.173 15:46:28 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:27:58.173 15:46:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 1 00:27:58.173 15:46:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.173 15:46:28 -- host/auth.sh@68 -- # digest=sha256 00:27:58.173 15:46:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:58.173 15:46:28 -- host/auth.sh@68 -- # keyid=1 00:27:58.173 15:46:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:58.173 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.173 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.173 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.173 15:46:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.173 15:46:28 -- nvmf/common.sh@717 -- # local ip 00:27:58.173 15:46:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.173 15:46:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.173 15:46:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.173 15:46:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.173 15:46:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.173 15:46:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.173 15:46:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.173 15:46:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.173 15:46:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.173 15:46:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:27:58.173 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.173 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 nvme0n1 00:27:58.430 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.430 15:46:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.430 15:46:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.430 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.430 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.430 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.430 15:46:28 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.430 15:46:28 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.430 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.430 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.688 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.688 15:46:28 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.688 15:46:28 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:27:58.688 15:46:28 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.688 15:46:28 -- host/auth.sh@44 -- # digest=sha256 00:27:58.688 15:46:28 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.688 15:46:28 -- host/auth.sh@44 
-- # keyid=2 00:27:58.688 15:46:28 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:27:58.688 15:46:28 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.688 15:46:28 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:58.688 15:46:28 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:27:58.688 15:46:28 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 2 00:27:58.688 15:46:28 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.688 15:46:28 -- host/auth.sh@68 -- # digest=sha256 00:27:58.688 15:46:28 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:58.688 15:46:28 -- host/auth.sh@68 -- # keyid=2 00:27:58.688 15:46:28 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:58.688 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.688 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.688 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.688 15:46:28 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.688 15:46:28 -- nvmf/common.sh@717 -- # local ip 00:27:58.688 15:46:28 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.688 15:46:28 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.688 15:46:28 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.688 15:46:28 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.688 15:46:28 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.688 15:46:28 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.688 15:46:28 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.688 15:46:28 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.688 15:46:28 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.688 15:46:28 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:27:58.688 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.688 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.688 nvme0n1 00:27:58.688 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.688 15:46:28 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.688 15:46:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.688 15:46:28 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.688 15:46:28 -- common/autotest_common.sh@10 -- # set +x 00:27:58.688 15:46:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.947 15:46:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:58.947 15:46:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:58.947 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.947 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:58.947 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.947 15:46:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:58.947 15:46:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:27:58.947 15:46:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:58.947 15:46:29 -- host/auth.sh@44 -- # digest=sha256 00:27:58.947 15:46:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:58.947 15:46:29 -- host/auth.sh@44 -- # keyid=3 00:27:58.947 15:46:29 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:27:58.947 15:46:29 
-- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:58.947 15:46:29 -- host/auth.sh@48 -- # echo ffdhe4096 00:27:58.947 15:46:29 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:27:58.947 15:46:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 3 00:27:58.947 15:46:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:58.947 15:46:29 -- host/auth.sh@68 -- # digest=sha256 00:27:58.947 15:46:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:58.947 15:46:29 -- host/auth.sh@68 -- # keyid=3 00:27:58.947 15:46:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:58.947 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.947 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:58.947 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.947 15:46:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:58.947 15:46:29 -- nvmf/common.sh@717 -- # local ip 00:27:58.947 15:46:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:58.947 15:46:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:58.947 15:46:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:58.947 15:46:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:58.947 15:46:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:58.947 15:46:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:58.947 15:46:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:58.947 15:46:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:58.947 15:46:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:58.947 15:46:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:27:58.947 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.947 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:58.947 nvme0n1 00:27:58.947 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:58.947 15:46:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:58.947 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:58.947 15:46:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:58.947 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.205 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.205 15:46:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.205 15:46:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.205 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.205 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.205 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.205 15:46:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.205 15:46:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:27:59.205 15:46:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.205 15:46:29 -- host/auth.sh@44 -- # digest=sha256 00:27:59.205 15:46:29 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:27:59.205 15:46:29 -- host/auth.sh@44 -- # keyid=4 00:27:59.205 15:46:29 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:27:59.205 15:46:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.205 15:46:29 -- host/auth.sh@48 -- # echo 
ffdhe4096 00:27:59.205 15:46:29 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:27:59.205 15:46:29 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe4096 4 00:27:59.205 15:46:29 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:27:59.205 15:46:29 -- host/auth.sh@68 -- # digest=sha256 00:27:59.205 15:46:29 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:27:59.205 15:46:29 -- host/auth.sh@68 -- # keyid=4 00:27:59.205 15:46:29 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:27:59.205 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.205 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.205 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.205 15:46:29 -- host/auth.sh@70 -- # get_main_ns_ip 00:27:59.205 15:46:29 -- nvmf/common.sh@717 -- # local ip 00:27:59.205 15:46:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:27:59.205 15:46:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:27:59.205 15:46:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:59.205 15:46:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:59.205 15:46:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:27:59.205 15:46:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:59.205 15:46:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:27:59.205 15:46:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:27:59.205 15:46:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:27:59.205 15:46:29 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:27:59.205 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.205 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.463 nvme0n1 00:27:59.463 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.463 15:46:29 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:27:59.463 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.463 15:46:29 -- host/auth.sh@73 -- # jq -r '.[].name' 00:27:59.463 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.463 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.463 15:46:29 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:59.463 15:46:29 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:27:59.463 15:46:29 -- common/autotest_common.sh@549 -- # xtrace_disable 00:27:59.463 15:46:29 -- common/autotest_common.sh@10 -- # set +x 00:27:59.463 15:46:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:27:59.463 15:46:29 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:27:59.463 15:46:29 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:27:59.463 15:46:29 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:27:59.463 15:46:29 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:27:59.463 15:46:29 -- host/auth.sh@44 -- # digest=sha256 00:27:59.463 15:46:29 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:27:59.463 15:46:29 -- host/auth.sh@44 -- # keyid=0 00:27:59.463 15:46:29 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:27:59.463 15:46:29 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:27:59.463 15:46:29 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:01.459 15:46:31 -- 
host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:01.459 15:46:31 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 0 00:28:01.459 15:46:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.459 15:46:31 -- host/auth.sh@68 -- # digest=sha256 00:28:01.459 15:46:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:01.459 15:46:31 -- host/auth.sh@68 -- # keyid=0 00:28:01.459 15:46:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:01.459 15:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.459 15:46:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.459 15:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.459 15:46:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.459 15:46:31 -- nvmf/common.sh@717 -- # local ip 00:28:01.459 15:46:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.459 15:46:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.459 15:46:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.459 15:46:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.459 15:46:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.459 15:46:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.459 15:46:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.459 15:46:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.459 15:46:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.459 15:46:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:01.459 15:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.459 15:46:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.459 nvme0n1 00:28:01.459 15:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.459 15:46:31 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.459 15:46:31 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.459 15:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.459 15:46:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.459 15:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.717 15:46:31 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.717 15:46:31 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.717 15:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.717 15:46:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.717 15:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.717 15:46:31 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.717 15:46:31 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:01.717 15:46:31 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.717 15:46:31 -- host/auth.sh@44 -- # digest=sha256 00:28:01.717 15:46:31 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.717 15:46:31 -- host/auth.sh@44 -- # keyid=1 00:28:01.717 15:46:31 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:01.717 15:46:31 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:01.717 15:46:31 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:01.717 15:46:31 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:01.717 15:46:31 -- 
host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 1 00:28:01.717 15:46:31 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.717 15:46:31 -- host/auth.sh@68 -- # digest=sha256 00:28:01.717 15:46:31 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:01.717 15:46:31 -- host/auth.sh@68 -- # keyid=1 00:28:01.717 15:46:31 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:01.717 15:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.717 15:46:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.717 15:46:31 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.717 15:46:31 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.717 15:46:31 -- nvmf/common.sh@717 -- # local ip 00:28:01.717 15:46:31 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.717 15:46:31 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.717 15:46:31 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.717 15:46:31 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.718 15:46:31 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.718 15:46:31 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.718 15:46:31 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.718 15:46:31 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.718 15:46:31 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.718 15:46:31 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:01.718 15:46:31 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.718 15:46:31 -- common/autotest_common.sh@10 -- # set +x 00:28:01.975 nvme0n1 00:28:01.975 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.975 15:46:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:01.975 15:46:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:01.975 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.975 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:01.975 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.975 15:46:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:01.975 15:46:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:01.976 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.976 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:01.976 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.976 15:46:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:01.976 15:46:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:01.976 15:46:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:01.976 15:46:32 -- host/auth.sh@44 -- # digest=sha256 00:28:01.976 15:46:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:01.976 15:46:32 -- host/auth.sh@44 -- # keyid=2 00:28:01.976 15:46:32 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:01.976 15:46:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:01.976 15:46:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:01.976 15:46:32 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:01.976 15:46:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 2 00:28:01.976 15:46:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:01.976 15:46:32 -- 
host/auth.sh@68 -- # digest=sha256 00:28:01.976 15:46:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:01.976 15:46:32 -- host/auth.sh@68 -- # keyid=2 00:28:01.976 15:46:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:01.976 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.976 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:01.976 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:01.976 15:46:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:01.976 15:46:32 -- nvmf/common.sh@717 -- # local ip 00:28:01.976 15:46:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:01.976 15:46:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:01.976 15:46:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:01.976 15:46:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:01.976 15:46:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:01.976 15:46:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:01.976 15:46:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:01.976 15:46:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:01.976 15:46:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:01.976 15:46:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:01.976 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:01.976 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.548 nvme0n1 00:28:02.548 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.548 15:46:32 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.548 15:46:32 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.548 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.548 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.548 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.548 15:46:32 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.548 15:46:32 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.548 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.548 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.548 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.548 15:46:32 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.548 15:46:32 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:02.548 15:46:32 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.548 15:46:32 -- host/auth.sh@44 -- # digest=sha256 00:28:02.548 15:46:32 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.548 15:46:32 -- host/auth.sh@44 -- # keyid=3 00:28:02.548 15:46:32 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:02.548 15:46:32 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:02.548 15:46:32 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:02.548 15:46:32 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:02.548 15:46:32 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 3 00:28:02.548 15:46:32 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.548 15:46:32 -- host/auth.sh@68 -- # digest=sha256 00:28:02.548 15:46:32 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:02.548 15:46:32 
-- host/auth.sh@68 -- # keyid=3 00:28:02.548 15:46:32 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:02.548 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.548 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.548 15:46:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.548 15:46:32 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:02.548 15:46:32 -- nvmf/common.sh@717 -- # local ip 00:28:02.548 15:46:32 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:02.548 15:46:32 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:02.548 15:46:32 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:02.548 15:46:32 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:02.548 15:46:32 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:02.548 15:46:32 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:02.548 15:46:32 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:02.548 15:46:32 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:02.548 15:46:32 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:02.548 15:46:32 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:02.548 15:46:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.548 15:46:32 -- common/autotest_common.sh@10 -- # set +x 00:28:02.806 nvme0n1 00:28:02.806 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.806 15:46:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:02.806 15:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.806 15:46:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:02.806 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:28:02.807 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.807 15:46:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:02.807 15:46:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:02.807 15:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.807 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:28:02.807 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:02.807 15:46:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:02.807 15:46:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:02.807 15:46:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:02.807 15:46:33 -- host/auth.sh@44 -- # digest=sha256 00:28:02.807 15:46:33 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:02.807 15:46:33 -- host/auth.sh@44 -- # keyid=4 00:28:02.807 15:46:33 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:02.807 15:46:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:02.807 15:46:33 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:02.807 15:46:33 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:02.807 15:46:33 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe6144 4 00:28:02.807 15:46:33 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:02.807 15:46:33 -- host/auth.sh@68 -- # digest=sha256 00:28:02.807 15:46:33 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:02.807 15:46:33 -- host/auth.sh@68 -- # keyid=4 00:28:02.807 15:46:33 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:02.807 15:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:02.807 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.064 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.064 15:46:33 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:03.064 15:46:33 -- nvmf/common.sh@717 -- # local ip 00:28:03.064 15:46:33 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:03.064 15:46:33 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:03.064 15:46:33 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:03.064 15:46:33 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:03.064 15:46:33 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:03.064 15:46:33 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:03.064 15:46:33 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:03.064 15:46:33 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:03.064 15:46:33 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:03.064 15:46:33 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:03.064 15:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.064 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.322 nvme0n1 00:28:03.322 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.322 15:46:33 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:03.322 15:46:33 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:03.322 15:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.322 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.322 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.322 15:46:33 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:03.322 15:46:33 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:03.322 15:46:33 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:03.322 15:46:33 -- common/autotest_common.sh@10 -- # set +x 00:28:03.322 15:46:33 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:03.322 15:46:33 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:03.322 15:46:33 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:03.322 15:46:33 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:03.322 15:46:33 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:03.322 15:46:33 -- host/auth.sh@44 -- # digest=sha256 00:28:03.322 15:46:33 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:03.322 15:46:33 -- host/auth.sh@44 -- # keyid=0 00:28:03.322 15:46:33 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:03.322 15:46:33 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:03.322 15:46:33 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:07.545 15:46:37 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:07.545 15:46:37 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 0 00:28:07.545 15:46:37 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:07.545 15:46:37 -- host/auth.sh@68 -- # digest=sha256 00:28:07.545 15:46:37 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:07.545 15:46:37 -- host/auth.sh@68 -- # keyid=0 00:28:07.545 15:46:37 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 
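(Editor's note, not part of the captured log) The xtrace output above keeps repeating one pattern per digest/dhgroup/keyid combination. Below is a minimal sketch of a single iteration of that loop, written from the commands visible in the trace itself: nvmet_auth_set_key is the test's own helper from host/auth.sh (it resolves the DHHC-1 key from the keyid internally), and the NQNs, address, and key names are the ones shown in the log. This is an illustrative reading of the trace, not the authoritative test script.
# one iteration of the digest/dhgroup/keyid loop exercised above (sketch)
digest=sha256; dhgroup=ffdhe6144; keyid=0
nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"     # program the kernel nvmet target side
rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key$keyid"
rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name'  # expect "nvme0" when authentication succeeds
rpc_cmd bdev_nvme_detach_controller nvme0             # tear down before the next keyid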
00:28:07.545 15:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.545 15:46:37 -- common/autotest_common.sh@10 -- # set +x 00:28:07.545 15:46:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.545 15:46:37 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:07.545 15:46:37 -- nvmf/common.sh@717 -- # local ip 00:28:07.545 15:46:37 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:07.545 15:46:37 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:07.545 15:46:37 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:07.545 15:46:37 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:07.545 15:46:37 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:07.545 15:46:37 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:07.545 15:46:37 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:07.545 15:46:37 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:07.545 15:46:37 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:07.545 15:46:37 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:07.545 15:46:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.545 15:46:37 -- common/autotest_common.sh@10 -- # set +x 00:28:07.802 nvme0n1 00:28:07.802 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.802 15:46:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:07.802 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.802 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:07.802 15:46:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:07.802 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:07.802 15:46:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:07.802 15:46:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:07.802 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:07.802 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.060 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.060 15:46:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.060 15:46:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:08.060 15:46:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.060 15:46:38 -- host/auth.sh@44 -- # digest=sha256 00:28:08.060 15:46:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.060 15:46:38 -- host/auth.sh@44 -- # keyid=1 00:28:08.060 15:46:38 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:08.060 15:46:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:08.060 15:46:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:08.060 15:46:38 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:08.060 15:46:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 1 00:28:08.060 15:46:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.060 15:46:38 -- host/auth.sh@68 -- # digest=sha256 00:28:08.060 15:46:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:08.060 15:46:38 -- host/auth.sh@68 -- # keyid=1 00:28:08.060 15:46:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:08.060 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.060 15:46:38 -- 
common/autotest_common.sh@10 -- # set +x 00:28:08.060 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.060 15:46:38 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:08.060 15:46:38 -- nvmf/common.sh@717 -- # local ip 00:28:08.060 15:46:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.060 15:46:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.060 15:46:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.060 15:46:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.060 15:46:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.060 15:46:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.060 15:46:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.060 15:46:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.060 15:46:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.060 15:46:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:08.060 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.060 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.624 nvme0n1 00:28:08.624 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.624 15:46:38 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:08.624 15:46:38 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:08.624 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.624 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.624 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.624 15:46:38 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:08.624 15:46:38 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:08.624 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.624 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.624 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.624 15:46:38 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:08.624 15:46:38 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:08.624 15:46:38 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:08.624 15:46:38 -- host/auth.sh@44 -- # digest=sha256 00:28:08.624 15:46:38 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:08.624 15:46:38 -- host/auth.sh@44 -- # keyid=2 00:28:08.624 15:46:38 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:08.624 15:46:38 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:08.624 15:46:38 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:08.625 15:46:38 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:08.625 15:46:38 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 2 00:28:08.625 15:46:38 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:08.625 15:46:38 -- host/auth.sh@68 -- # digest=sha256 00:28:08.625 15:46:38 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:08.625 15:46:38 -- host/auth.sh@68 -- # keyid=2 00:28:08.625 15:46:38 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:08.625 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.625 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:08.625 15:46:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:08.625 15:46:38 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:28:08.625 15:46:38 -- nvmf/common.sh@717 -- # local ip 00:28:08.625 15:46:38 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:08.625 15:46:38 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:08.625 15:46:38 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:08.625 15:46:38 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:08.625 15:46:38 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:08.625 15:46:38 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:08.625 15:46:38 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:08.625 15:46:38 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:08.625 15:46:38 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:08.625 15:46:38 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:08.625 15:46:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:08.625 15:46:38 -- common/autotest_common.sh@10 -- # set +x 00:28:09.559 nvme0n1 00:28:09.559 15:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.559 15:46:39 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:09.559 15:46:39 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:09.559 15:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.559 15:46:39 -- common/autotest_common.sh@10 -- # set +x 00:28:09.559 15:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.559 15:46:39 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:09.559 15:46:39 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:09.559 15:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.559 15:46:39 -- common/autotest_common.sh@10 -- # set +x 00:28:09.559 15:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.559 15:46:39 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:09.559 15:46:39 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:09.559 15:46:39 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:09.559 15:46:39 -- host/auth.sh@44 -- # digest=sha256 00:28:09.559 15:46:39 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:09.559 15:46:39 -- host/auth.sh@44 -- # keyid=3 00:28:09.559 15:46:39 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:09.559 15:46:39 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:09.559 15:46:39 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:09.559 15:46:39 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:09.559 15:46:39 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 3 00:28:09.559 15:46:39 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:09.559 15:46:39 -- host/auth.sh@68 -- # digest=sha256 00:28:09.559 15:46:39 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:09.559 15:46:39 -- host/auth.sh@68 -- # keyid=3 00:28:09.559 15:46:39 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:09.559 15:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.559 15:46:39 -- common/autotest_common.sh@10 -- # set +x 00:28:09.559 15:46:39 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:09.559 15:46:39 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:09.559 15:46:39 -- nvmf/common.sh@717 -- # local ip 00:28:09.559 15:46:39 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:28:09.559 15:46:39 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:09.559 15:46:39 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:09.559 15:46:39 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:09.560 15:46:39 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:09.560 15:46:39 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:09.560 15:46:39 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:09.560 15:46:39 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:09.560 15:46:39 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:09.560 15:46:39 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:09.560 15:46:39 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:09.560 15:46:39 -- common/autotest_common.sh@10 -- # set +x 00:28:10.126 nvme0n1 00:28:10.126 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.126 15:46:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.126 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.126 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.126 15:46:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.126 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.126 15:46:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.126 15:46:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.126 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.126 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.126 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.126 15:46:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.126 15:46:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:10.126 15:46:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.126 15:46:40 -- host/auth.sh@44 -- # digest=sha256 00:28:10.126 15:46:40 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:10.126 15:46:40 -- host/auth.sh@44 -- # keyid=4 00:28:10.126 15:46:40 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:10.126 15:46:40 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:10.126 15:46:40 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:10.126 15:46:40 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:10.126 15:46:40 -- host/auth.sh@111 -- # connect_authenticate sha256 ffdhe8192 4 00:28:10.126 15:46:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.126 15:46:40 -- host/auth.sh@68 -- # digest=sha256 00:28:10.126 15:46:40 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:10.126 15:46:40 -- host/auth.sh@68 -- # keyid=4 00:28:10.126 15:46:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:10.126 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.126 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.126 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.126 15:46:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.126 15:46:40 -- nvmf/common.sh@717 -- # local ip 00:28:10.126 15:46:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.126 15:46:40 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:28:10.126 15:46:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.126 15:46:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.126 15:46:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.126 15:46:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.126 15:46:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.126 15:46:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.126 15:46:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.126 15:46:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:10.126 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.126 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.692 nvme0n1 00:28:10.692 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.692 15:46:40 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.692 15:46:40 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.692 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.692 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.692 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.692 15:46:40 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.692 15:46:40 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.692 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.692 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.692 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.692 15:46:40 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:10.692 15:46:40 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:10.692 15:46:40 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.692 15:46:40 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:10.692 15:46:40 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.692 15:46:40 -- host/auth.sh@44 -- # digest=sha384 00:28:10.692 15:46:40 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.692 15:46:40 -- host/auth.sh@44 -- # keyid=0 00:28:10.692 15:46:40 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:10.692 15:46:40 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:10.692 15:46:40 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:10.692 15:46:40 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:10.692 15:46:40 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 0 00:28:10.692 15:46:40 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.692 15:46:40 -- host/auth.sh@68 -- # digest=sha384 00:28:10.692 15:46:40 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:10.692 15:46:40 -- host/auth.sh@68 -- # keyid=0 00:28:10.692 15:46:40 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:10.692 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.692 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.692 15:46:40 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.692 15:46:40 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.692 15:46:40 -- nvmf/common.sh@717 -- # local ip 00:28:10.692 15:46:40 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.692 15:46:40 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:28:10.692 15:46:40 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.692 15:46:40 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.692 15:46:40 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.692 15:46:40 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.692 15:46:40 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.692 15:46:40 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.692 15:46:40 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.692 15:46:40 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:10.692 15:46:40 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.692 15:46:40 -- common/autotest_common.sh@10 -- # set +x 00:28:10.950 nvme0n1 00:28:10.950 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.950 15:46:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:10.950 15:46:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:10.950 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.950 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:10.950 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.950 15:46:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:10.950 15:46:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:10.950 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.950 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:10.950 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.950 15:46:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:10.950 15:46:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:10.950 15:46:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:10.950 15:46:41 -- host/auth.sh@44 -- # digest=sha384 00:28:10.950 15:46:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:10.950 15:46:41 -- host/auth.sh@44 -- # keyid=1 00:28:10.950 15:46:41 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:10.950 15:46:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:10.950 15:46:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:10.950 15:46:41 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:10.950 15:46:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 1 00:28:10.950 15:46:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:10.950 15:46:41 -- host/auth.sh@68 -- # digest=sha384 00:28:10.950 15:46:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:10.950 15:46:41 -- host/auth.sh@68 -- # keyid=1 00:28:10.950 15:46:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:10.950 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.950 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:10.950 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:10.950 15:46:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:10.950 15:46:41 -- nvmf/common.sh@717 -- # local ip 00:28:10.950 15:46:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:10.950 15:46:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:10.950 15:46:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:10.950 
15:46:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:10.950 15:46:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:10.950 15:46:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:10.950 15:46:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:10.950 15:46:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:10.950 15:46:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:10.950 15:46:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:10.950 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:10.950 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.207 nvme0n1 00:28:11.207 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.207 15:46:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.207 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.207 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.207 15:46:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.207 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.207 15:46:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.207 15:46:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.207 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.207 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.207 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.207 15:46:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.207 15:46:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:11.207 15:46:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.207 15:46:41 -- host/auth.sh@44 -- # digest=sha384 00:28:11.207 15:46:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.207 15:46:41 -- host/auth.sh@44 -- # keyid=2 00:28:11.207 15:46:41 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:11.207 15:46:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.207 15:46:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:11.207 15:46:41 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:11.207 15:46:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 2 00:28:11.207 15:46:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.207 15:46:41 -- host/auth.sh@68 -- # digest=sha384 00:28:11.207 15:46:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:11.207 15:46:41 -- host/auth.sh@68 -- # keyid=2 00:28:11.207 15:46:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:11.207 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.207 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.207 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.207 15:46:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.207 15:46:41 -- nvmf/common.sh@717 -- # local ip 00:28:11.207 15:46:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.207 15:46:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.207 15:46:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.207 15:46:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.207 15:46:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.207 15:46:41 -- 
nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.207 15:46:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.207 15:46:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.207 15:46:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.207 15:46:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:11.207 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.207 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.207 nvme0n1 00:28:11.207 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.207 15:46:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.207 15:46:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.207 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.207 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.207 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.466 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.466 15:46:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:11.466 15:46:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.466 15:46:41 -- host/auth.sh@44 -- # digest=sha384 00:28:11.466 15:46:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.466 15:46:41 -- host/auth.sh@44 -- # keyid=3 00:28:11.466 15:46:41 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:11.466 15:46:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.466 15:46:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:11.466 15:46:41 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:11.466 15:46:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 3 00:28:11.466 15:46:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.466 15:46:41 -- host/auth.sh@68 -- # digest=sha384 00:28:11.466 15:46:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:11.466 15:46:41 -- host/auth.sh@68 -- # keyid=3 00:28:11.466 15:46:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.466 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.466 15:46:41 -- nvmf/common.sh@717 -- # local ip 00:28:11.466 15:46:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.466 15:46:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.466 15:46:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.466 15:46:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.466 15:46:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.466 15:46:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.466 15:46:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 
00:28:11.466 15:46:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.466 15:46:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.466 15:46:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.466 nvme0n1 00:28:11.466 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.466 15:46:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.466 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.466 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.466 15:46:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:11.466 15:46:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.466 15:46:41 -- host/auth.sh@44 -- # digest=sha384 00:28:11.466 15:46:41 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:11.466 15:46:41 -- host/auth.sh@44 -- # keyid=4 00:28:11.466 15:46:41 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:11.466 15:46:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.466 15:46:41 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:11.466 15:46:41 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:11.466 15:46:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe2048 4 00:28:11.466 15:46:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.466 15:46:41 -- host/auth.sh@68 -- # digest=sha384 00:28:11.466 15:46:41 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:11.466 15:46:41 -- host/auth.sh@68 -- # keyid=4 00:28:11.466 15:46:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.466 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.466 15:46:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.466 15:46:41 -- nvmf/common.sh@717 -- # local ip 00:28:11.466 15:46:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.466 15:46:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.466 15:46:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.466 15:46:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.466 15:46:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.466 15:46:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.466 15:46:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.466 15:46:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.466 
15:46:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.466 15:46:41 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:11.466 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.466 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.728 nvme0n1 00:28:11.728 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.728 15:46:41 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.728 15:46:41 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.728 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.728 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.728 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.728 15:46:41 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.728 15:46:41 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.728 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.728 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.728 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.728 15:46:41 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:11.728 15:46:41 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.728 15:46:41 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:11.728 15:46:41 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.728 15:46:41 -- host/auth.sh@44 -- # digest=sha384 00:28:11.728 15:46:41 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.728 15:46:41 -- host/auth.sh@44 -- # keyid=0 00:28:11.728 15:46:41 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:11.728 15:46:41 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.728 15:46:41 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:11.728 15:46:41 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:11.728 15:46:41 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 0 00:28:11.728 15:46:41 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.728 15:46:41 -- host/auth.sh@68 -- # digest=sha384 00:28:11.728 15:46:41 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:11.728 15:46:41 -- host/auth.sh@68 -- # keyid=0 00:28:11.728 15:46:41 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:11.728 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.728 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.728 15:46:41 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.728 15:46:41 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.728 15:46:41 -- nvmf/common.sh@717 -- # local ip 00:28:11.728 15:46:41 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.728 15:46:41 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.728 15:46:41 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.728 15:46:41 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.728 15:46:41 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.728 15:46:41 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.728 15:46:41 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.728 15:46:41 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.728 15:46:41 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.728 15:46:41 -- 
host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:11.728 15:46:41 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.728 15:46:41 -- common/autotest_common.sh@10 -- # set +x 00:28:11.986 nvme0n1 00:28:11.986 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.986 15:46:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.986 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.986 15:46:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.986 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:11.986 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.986 15:46:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.986 15:46:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.986 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.986 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:11.986 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.986 15:46:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:11.986 15:46:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:11.986 15:46:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:11.986 15:46:42 -- host/auth.sh@44 -- # digest=sha384 00:28:11.986 15:46:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:11.986 15:46:42 -- host/auth.sh@44 -- # keyid=1 00:28:11.986 15:46:42 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:11.986 15:46:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:11.986 15:46:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:11.986 15:46:42 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:11.986 15:46:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 1 00:28:11.986 15:46:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:11.986 15:46:42 -- host/auth.sh@68 -- # digest=sha384 00:28:11.986 15:46:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:11.986 15:46:42 -- host/auth.sh@68 -- # keyid=1 00:28:11.986 15:46:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:11.986 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.986 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:11.986 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.986 15:46:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:11.986 15:46:42 -- nvmf/common.sh@717 -- # local ip 00:28:11.986 15:46:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:11.986 15:46:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:11.986 15:46:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:11.986 15:46:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:11.986 15:46:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:11.986 15:46:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:11.986 15:46:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:11.986 15:46:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:11.987 15:46:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:11.987 15:46:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:11.987 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.987 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:11.987 nvme0n1 00:28:11.987 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.987 15:46:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:11.987 15:46:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:11.987 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.987 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:11.987 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:11.987 15:46:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:11.987 15:46:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:11.987 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:11.987 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.245 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.245 15:46:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.245 15:46:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:12.245 15:46:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.245 15:46:42 -- host/auth.sh@44 -- # digest=sha384 00:28:12.245 15:46:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.245 15:46:42 -- host/auth.sh@44 -- # keyid=2 00:28:12.245 15:46:42 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:12.245 15:46:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:12.245 15:46:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:12.245 15:46:42 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:12.245 15:46:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 2 00:28:12.245 15:46:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.245 15:46:42 -- host/auth.sh@68 -- # digest=sha384 00:28:12.245 15:46:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:12.245 15:46:42 -- host/auth.sh@68 -- # keyid=2 00:28:12.246 15:46:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:12.246 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.246 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.246 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.246 15:46:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.246 15:46:42 -- nvmf/common.sh@717 -- # local ip 00:28:12.246 15:46:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.246 15:46:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.246 15:46:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.246 15:46:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.246 15:46:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.246 15:46:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.246 15:46:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.246 15:46:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.246 15:46:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.246 15:46:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:12.246 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.246 
15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.246 nvme0n1 00:28:12.246 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.246 15:46:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.246 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.246 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.246 15:46:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.246 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.246 15:46:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.246 15:46:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.246 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.246 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.246 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.246 15:46:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.246 15:46:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:12.246 15:46:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.246 15:46:42 -- host/auth.sh@44 -- # digest=sha384 00:28:12.246 15:46:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.246 15:46:42 -- host/auth.sh@44 -- # keyid=3 00:28:12.246 15:46:42 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:12.246 15:46:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:12.246 15:46:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:12.246 15:46:42 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:12.246 15:46:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 3 00:28:12.246 15:46:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.246 15:46:42 -- host/auth.sh@68 -- # digest=sha384 00:28:12.246 15:46:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:12.246 15:46:42 -- host/auth.sh@68 -- # keyid=3 00:28:12.246 15:46:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:12.246 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.246 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.246 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.246 15:46:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.246 15:46:42 -- nvmf/common.sh@717 -- # local ip 00:28:12.246 15:46:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.246 15:46:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.246 15:46:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.246 15:46:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.246 15:46:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.246 15:46:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.246 15:46:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.246 15:46:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.246 15:46:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.246 15:46:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:12.246 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.246 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.505 nvme0n1 00:28:12.505 15:46:42 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.505 15:46:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.505 15:46:42 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.505 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.505 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.505 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.505 15:46:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.505 15:46:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.505 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.505 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.505 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.505 15:46:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.505 15:46:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:28:12.505 15:46:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.505 15:46:42 -- host/auth.sh@44 -- # digest=sha384 00:28:12.505 15:46:42 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:12.505 15:46:42 -- host/auth.sh@44 -- # keyid=4 00:28:12.505 15:46:42 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:12.505 15:46:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:12.505 15:46:42 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:12.505 15:46:42 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:12.505 15:46:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe3072 4 00:28:12.505 15:46:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.505 15:46:42 -- host/auth.sh@68 -- # digest=sha384 00:28:12.505 15:46:42 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:12.505 15:46:42 -- host/auth.sh@68 -- # keyid=4 00:28:12.505 15:46:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:12.505 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.505 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.505 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.505 15:46:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.505 15:46:42 -- nvmf/common.sh@717 -- # local ip 00:28:12.505 15:46:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.505 15:46:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.505 15:46:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.505 15:46:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.505 15:46:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.505 15:46:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.505 15:46:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.505 15:46:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.505 15:46:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.505 15:46:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:12.505 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.505 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.763 nvme0n1 00:28:12.763 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.763 15:46:42 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:28:12.763 15:46:42 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:12.763 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.763 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.763 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.763 15:46:42 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:12.763 15:46:42 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:12.763 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.763 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.763 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.763 15:46:42 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:12.763 15:46:42 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:12.763 15:46:42 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:12.763 15:46:42 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:12.763 15:46:42 -- host/auth.sh@44 -- # digest=sha384 00:28:12.763 15:46:42 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:12.763 15:46:42 -- host/auth.sh@44 -- # keyid=0 00:28:12.763 15:46:42 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:12.763 15:46:42 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:12.763 15:46:42 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:12.763 15:46:42 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:12.763 15:46:42 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 0 00:28:12.763 15:46:42 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:12.763 15:46:42 -- host/auth.sh@68 -- # digest=sha384 00:28:12.763 15:46:42 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:12.763 15:46:42 -- host/auth.sh@68 -- # keyid=0 00:28:12.763 15:46:42 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:12.763 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.763 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:12.763 15:46:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:12.763 15:46:42 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:12.763 15:46:42 -- nvmf/common.sh@717 -- # local ip 00:28:12.763 15:46:42 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:12.763 15:46:42 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:12.763 15:46:42 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:12.763 15:46:42 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:12.763 15:46:42 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:12.763 15:46:42 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:12.763 15:46:42 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:12.764 15:46:42 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:12.764 15:46:42 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:12.764 15:46:42 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:12.764 15:46:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:12.764 15:46:42 -- common/autotest_common.sh@10 -- # set +x 00:28:13.022 nvme0n1 00:28:13.022 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.022 15:46:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.022 15:46:43 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.022 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.022 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.022 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.022 15:46:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.022 15:46:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.022 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.022 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.022 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.022 15:46:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.022 15:46:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:13.022 15:46:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.022 15:46:43 -- host/auth.sh@44 -- # digest=sha384 00:28:13.022 15:46:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.022 15:46:43 -- host/auth.sh@44 -- # keyid=1 00:28:13.022 15:46:43 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:13.022 15:46:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:13.022 15:46:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:13.022 15:46:43 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:13.022 15:46:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 1 00:28:13.022 15:46:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.022 15:46:43 -- host/auth.sh@68 -- # digest=sha384 00:28:13.022 15:46:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:13.022 15:46:43 -- host/auth.sh@68 -- # keyid=1 00:28:13.022 15:46:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.022 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.022 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.022 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.022 15:46:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.022 15:46:43 -- nvmf/common.sh@717 -- # local ip 00:28:13.022 15:46:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.022 15:46:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.022 15:46:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.022 15:46:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.022 15:46:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.022 15:46:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.022 15:46:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.022 15:46:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.022 15:46:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.022 15:46:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:13.022 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.022 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.280 nvme0n1 00:28:13.280 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.280 15:46:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.280 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.280 15:46:43 -- host/auth.sh@73 -- # jq -r '.[].name' 
00:28:13.280 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.280 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.280 15:46:43 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.280 15:46:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.280 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.280 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.281 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.281 15:46:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.281 15:46:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:13.281 15:46:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.281 15:46:43 -- host/auth.sh@44 -- # digest=sha384 00:28:13.281 15:46:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.281 15:46:43 -- host/auth.sh@44 -- # keyid=2 00:28:13.281 15:46:43 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:13.281 15:46:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:13.281 15:46:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:13.281 15:46:43 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:13.281 15:46:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 2 00:28:13.281 15:46:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.281 15:46:43 -- host/auth.sh@68 -- # digest=sha384 00:28:13.281 15:46:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:13.281 15:46:43 -- host/auth.sh@68 -- # keyid=2 00:28:13.281 15:46:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.281 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.281 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.281 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.281 15:46:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.281 15:46:43 -- nvmf/common.sh@717 -- # local ip 00:28:13.281 15:46:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.281 15:46:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.281 15:46:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.281 15:46:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.281 15:46:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.281 15:46:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.281 15:46:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.281 15:46:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.281 15:46:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.281 15:46:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:13.281 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.281 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.539 nvme0n1 00:28:13.539 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.539 15:46:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.539 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.539 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.539 15:46:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.539 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.539 15:46:43 -- 
host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.539 15:46:43 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:13.539 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.539 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.539 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.539 15:46:43 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.539 15:46:43 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:13.539 15:46:43 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.539 15:46:43 -- host/auth.sh@44 -- # digest=sha384 00:28:13.539 15:46:43 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.539 15:46:43 -- host/auth.sh@44 -- # keyid=3 00:28:13.539 15:46:43 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:13.539 15:46:43 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:13.539 15:46:43 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:13.539 15:46:43 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:13.539 15:46:43 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 3 00:28:13.539 15:46:43 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.539 15:46:43 -- host/auth.sh@68 -- # digest=sha384 00:28:13.539 15:46:43 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:13.539 15:46:43 -- host/auth.sh@68 -- # keyid=3 00:28:13.539 15:46:43 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.539 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.539 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.539 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.539 15:46:43 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.539 15:46:43 -- nvmf/common.sh@717 -- # local ip 00:28:13.539 15:46:43 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.539 15:46:43 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.539 15:46:43 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.539 15:46:43 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.539 15:46:43 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.539 15:46:43 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.539 15:46:43 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.539 15:46:43 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.539 15:46:43 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.539 15:46:43 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:13.539 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.539 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.797 nvme0n1 00:28:13.797 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.797 15:46:43 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:13.797 15:46:43 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.797 15:46:43 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:13.797 15:46:43 -- common/autotest_common.sh@10 -- # set +x 00:28:13.797 15:46:43 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.797 15:46:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:13.797 15:46:44 -- host/auth.sh@74 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:28:13.797 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.797 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:13.797 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.797 15:46:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:13.797 15:46:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:13.797 15:46:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:13.797 15:46:44 -- host/auth.sh@44 -- # digest=sha384 00:28:13.797 15:46:44 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:13.797 15:46:44 -- host/auth.sh@44 -- # keyid=4 00:28:13.797 15:46:44 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:13.797 15:46:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:13.797 15:46:44 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:13.797 15:46:44 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:13.797 15:46:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe4096 4 00:28:13.797 15:46:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:13.797 15:46:44 -- host/auth.sh@68 -- # digest=sha384 00:28:13.797 15:46:44 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:13.797 15:46:44 -- host/auth.sh@68 -- # keyid=4 00:28:13.797 15:46:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:13.797 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.797 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:13.797 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:13.797 15:46:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:13.797 15:46:44 -- nvmf/common.sh@717 -- # local ip 00:28:13.797 15:46:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:13.797 15:46:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:13.797 15:46:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:13.797 15:46:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:13.797 15:46:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:13.797 15:46:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:13.797 15:46:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:13.797 15:46:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:13.797 15:46:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:13.797 15:46:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:13.797 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:13.797 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.056 nvme0n1 00:28:14.056 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.056 15:46:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.056 15:46:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.056 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.056 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.056 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.056 15:46:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.056 15:46:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.056 15:46:44 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.056 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.056 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.056 15:46:44 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:14.056 15:46:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.056 15:46:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:14.056 15:46:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.056 15:46:44 -- host/auth.sh@44 -- # digest=sha384 00:28:14.056 15:46:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.056 15:46:44 -- host/auth.sh@44 -- # keyid=0 00:28:14.056 15:46:44 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:14.056 15:46:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:14.056 15:46:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:14.056 15:46:44 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:14.056 15:46:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 0 00:28:14.056 15:46:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.056 15:46:44 -- host/auth.sh@68 -- # digest=sha384 00:28:14.056 15:46:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:14.056 15:46:44 -- host/auth.sh@68 -- # keyid=0 00:28:14.056 15:46:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:14.056 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.056 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.056 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.056 15:46:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.056 15:46:44 -- nvmf/common.sh@717 -- # local ip 00:28:14.056 15:46:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.056 15:46:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.056 15:46:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.056 15:46:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.056 15:46:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.056 15:46:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.056 15:46:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.056 15:46:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.056 15:46:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.056 15:46:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:14.056 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.056 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.622 nvme0n1 00:28:14.622 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.622 15:46:44 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.622 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.622 15:46:44 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.622 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.622 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.622 15:46:44 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.622 15:46:44 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.622 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.622 15:46:44 -- 
common/autotest_common.sh@10 -- # set +x 00:28:14.622 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.622 15:46:44 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.622 15:46:44 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:14.622 15:46:44 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.622 15:46:44 -- host/auth.sh@44 -- # digest=sha384 00:28:14.622 15:46:44 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.622 15:46:44 -- host/auth.sh@44 -- # keyid=1 00:28:14.622 15:46:44 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:14.622 15:46:44 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:14.622 15:46:44 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:14.622 15:46:44 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:14.622 15:46:44 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 1 00:28:14.622 15:46:44 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.622 15:46:44 -- host/auth.sh@68 -- # digest=sha384 00:28:14.622 15:46:44 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:14.622 15:46:44 -- host/auth.sh@68 -- # keyid=1 00:28:14.622 15:46:44 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:14.622 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.622 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.622 15:46:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.622 15:46:44 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:14.622 15:46:44 -- nvmf/common.sh@717 -- # local ip 00:28:14.622 15:46:44 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:14.622 15:46:44 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:14.622 15:46:44 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:14.622 15:46:44 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:14.622 15:46:44 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:14.622 15:46:44 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:14.622 15:46:44 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:14.622 15:46:44 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:14.622 15:46:44 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:14.622 15:46:44 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:14.622 15:46:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.622 15:46:44 -- common/autotest_common.sh@10 -- # set +x 00:28:14.880 nvme0n1 00:28:14.880 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.880 15:46:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:14.880 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.880 15:46:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:14.880 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:14.880 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:14.880 15:46:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:14.880 15:46:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:14.880 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.880 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:14.880 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 
00:28:14.880 15:46:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:14.880 15:46:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:14.880 15:46:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:14.880 15:46:45 -- host/auth.sh@44 -- # digest=sha384 00:28:14.880 15:46:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:14.880 15:46:45 -- host/auth.sh@44 -- # keyid=2 00:28:14.880 15:46:45 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:14.880 15:46:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:14.880 15:46:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:14.880 15:46:45 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:14.880 15:46:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 2 00:28:14.880 15:46:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:14.880 15:46:45 -- host/auth.sh@68 -- # digest=sha384 00:28:14.880 15:46:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:14.880 15:46:45 -- host/auth.sh@68 -- # keyid=2 00:28:14.880 15:46:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:14.880 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:14.880 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.138 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.138 15:46:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.138 15:46:45 -- nvmf/common.sh@717 -- # local ip 00:28:15.138 15:46:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.138 15:46:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.138 15:46:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.138 15:46:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.138 15:46:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.138 15:46:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.138 15:46:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.138 15:46:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.138 15:46:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.138 15:46:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:15.138 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.138 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.397 nvme0n1 00:28:15.397 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.397 15:46:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.397 15:46:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.397 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.397 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.397 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.397 15:46:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.397 15:46:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.397 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.397 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.397 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.397 15:46:45 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.397 15:46:45 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 3 
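The host/auth.sh@108 and @109 markers show that this whole sha384 pass is just two nested loops: an outer loop over the FFDHE groups seen in this run (ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ffdhe8192) and an inner loop over key indexes 0 through 4, each iteration running the cycle sketched earlier. Roughly, assuming the dhgroups and keys arrays populated earlier in auth.sh:

    # Outer structure of the sha384 pass (sketch; dhgroups/keys come from the
    # auth.sh setup, connect_authenticate is the per-key cycle shown above).
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do                 # keyids 0..4, DHHC-1 secrets
            nvmet_auth_set_key sha384 "$dhgroup" "$keyid"
            connect_authenticate sha384 "$dhgroup" "$keyid"   # host/auth.sh@111
        done
    done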
00:28:15.397 15:46:45 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.397 15:46:45 -- host/auth.sh@44 -- # digest=sha384 00:28:15.397 15:46:45 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.397 15:46:45 -- host/auth.sh@44 -- # keyid=3 00:28:15.397 15:46:45 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:15.397 15:46:45 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:15.397 15:46:45 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:15.397 15:46:45 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:15.397 15:46:45 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 3 00:28:15.397 15:46:45 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.397 15:46:45 -- host/auth.sh@68 -- # digest=sha384 00:28:15.397 15:46:45 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:15.397 15:46:45 -- host/auth.sh@68 -- # keyid=3 00:28:15.397 15:46:45 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:15.397 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.397 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.397 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.397 15:46:45 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.397 15:46:45 -- nvmf/common.sh@717 -- # local ip 00:28:15.397 15:46:45 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.397 15:46:45 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.397 15:46:45 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.397 15:46:45 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.397 15:46:45 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.397 15:46:45 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.397 15:46:45 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.397 15:46:45 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.397 15:46:45 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.397 15:46:45 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:15.397 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.397 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.655 nvme0n1 00:28:15.655 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.913 15:46:45 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:15.913 15:46:45 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:15.913 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.913 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.913 15:46:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.913 15:46:45 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:15.913 15:46:45 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:15.913 15:46:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.913 15:46:45 -- common/autotest_common.sh@10 -- # set +x 00:28:15.913 15:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.913 15:46:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:15.913 15:46:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:15.913 15:46:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:15.913 15:46:46 -- host/auth.sh@44 -- 
# digest=sha384 00:28:15.913 15:46:46 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:15.913 15:46:46 -- host/auth.sh@44 -- # keyid=4 00:28:15.913 15:46:46 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:15.913 15:46:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:15.913 15:46:46 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:15.913 15:46:46 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:15.913 15:46:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe6144 4 00:28:15.913 15:46:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:15.914 15:46:46 -- host/auth.sh@68 -- # digest=sha384 00:28:15.914 15:46:46 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:15.914 15:46:46 -- host/auth.sh@68 -- # keyid=4 00:28:15.914 15:46:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:15.914 15:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.914 15:46:46 -- common/autotest_common.sh@10 -- # set +x 00:28:15.914 15:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:15.914 15:46:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:15.914 15:46:46 -- nvmf/common.sh@717 -- # local ip 00:28:15.914 15:46:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:15.914 15:46:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:15.914 15:46:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:15.914 15:46:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:15.914 15:46:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:15.914 15:46:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:15.914 15:46:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:15.914 15:46:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:15.914 15:46:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:15.914 15:46:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:15.914 15:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:15.914 15:46:46 -- common/autotest_common.sh@10 -- # set +x 00:28:16.173 nvme0n1 00:28:16.173 15:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.173 15:46:46 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:16.173 15:46:46 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:16.173 15:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.173 15:46:46 -- common/autotest_common.sh@10 -- # set +x 00:28:16.173 15:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.173 15:46:46 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:16.173 15:46:46 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:16.173 15:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.173 15:46:46 -- common/autotest_common.sh@10 -- # set +x 00:28:16.173 15:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.173 15:46:46 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:16.173 15:46:46 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:16.173 15:46:46 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:16.173 15:46:46 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:16.173 15:46:46 -- host/auth.sh@44 -- # 
digest=sha384 00:28:16.173 15:46:46 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:16.173 15:46:46 -- host/auth.sh@44 -- # keyid=0 00:28:16.173 15:46:46 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:16.173 15:46:46 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:16.173 15:46:46 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:16.173 15:46:46 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:16.173 15:46:46 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 0 00:28:16.173 15:46:46 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:16.173 15:46:46 -- host/auth.sh@68 -- # digest=sha384 00:28:16.173 15:46:46 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:16.173 15:46:46 -- host/auth.sh@68 -- # keyid=0 00:28:16.173 15:46:46 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:16.173 15:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.173 15:46:46 -- common/autotest_common.sh@10 -- # set +x 00:28:16.173 15:46:46 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:16.173 15:46:46 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:16.173 15:46:46 -- nvmf/common.sh@717 -- # local ip 00:28:16.173 15:46:46 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:16.173 15:46:46 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:16.173 15:46:46 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:16.173 15:46:46 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:16.173 15:46:46 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:16.173 15:46:46 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:16.173 15:46:46 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:16.173 15:46:46 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:16.173 15:46:46 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:16.173 15:46:46 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:16.173 15:46:46 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:16.173 15:46:46 -- common/autotest_common.sh@10 -- # set +x 00:28:17.121 nvme0n1 00:28:17.121 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.121 15:46:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.121 15:46:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:17.121 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.121 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.121 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.121 15:46:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.121 15:46:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.121 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.121 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.121 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.121 15:46:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:17.121 15:46:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:17.122 15:46:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:17.122 15:46:47 -- host/auth.sh@44 -- # digest=sha384 00:28:17.122 15:46:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.122 15:46:47 -- host/auth.sh@44 -- # keyid=1 00:28:17.122 15:46:47 -- 
host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:17.122 15:46:47 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:17.122 15:46:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:17.122 15:46:47 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:17.122 15:46:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 1 00:28:17.122 15:46:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:17.122 15:46:47 -- host/auth.sh@68 -- # digest=sha384 00:28:17.122 15:46:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:17.122 15:46:47 -- host/auth.sh@68 -- # keyid=1 00:28:17.122 15:46:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:17.122 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.122 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.122 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.122 15:46:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:17.122 15:46:47 -- nvmf/common.sh@717 -- # local ip 00:28:17.122 15:46:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:17.122 15:46:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:17.122 15:46:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.122 15:46:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.122 15:46:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:17.122 15:46:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.122 15:46:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:17.122 15:46:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:17.122 15:46:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:17.122 15:46:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:17.122 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.122 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.699 nvme0n1 00:28:17.699 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.699 15:46:47 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:17.699 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.699 15:46:47 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:17.699 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.699 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.699 15:46:47 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:17.699 15:46:47 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:17.699 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.699 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.699 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.699 15:46:47 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:17.699 15:46:47 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:17.699 15:46:47 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:17.699 15:46:47 -- host/auth.sh@44 -- # digest=sha384 00:28:17.699 15:46:47 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:17.699 15:46:47 -- host/auth.sh@44 -- # keyid=2 00:28:17.699 15:46:47 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:17.699 15:46:47 -- 
host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:17.699 15:46:47 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:17.699 15:46:47 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:17.699 15:46:47 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 2 00:28:17.699 15:46:47 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:17.699 15:46:47 -- host/auth.sh@68 -- # digest=sha384 00:28:17.699 15:46:47 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:17.699 15:46:47 -- host/auth.sh@68 -- # keyid=2 00:28:17.699 15:46:47 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:17.700 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.700 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:17.700 15:46:47 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:17.700 15:46:47 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:17.700 15:46:47 -- nvmf/common.sh@717 -- # local ip 00:28:17.700 15:46:47 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:17.700 15:46:47 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:17.700 15:46:47 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:17.700 15:46:47 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:17.700 15:46:47 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:17.700 15:46:47 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:17.700 15:46:47 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:17.700 15:46:47 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:17.700 15:46:47 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:17.700 15:46:47 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:17.700 15:46:47 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:17.700 15:46:47 -- common/autotest_common.sh@10 -- # set +x 00:28:18.266 nvme0n1 00:28:18.266 15:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.266 15:46:48 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:18.266 15:46:48 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.266 15:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.266 15:46:48 -- common/autotest_common.sh@10 -- # set +x 00:28:18.266 15:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.266 15:46:48 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:18.266 15:46:48 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:18.266 15:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.266 15:46:48 -- common/autotest_common.sh@10 -- # set +x 00:28:18.266 15:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.266 15:46:48 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:18.266 15:46:48 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:18.266 15:46:48 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:18.266 15:46:48 -- host/auth.sh@44 -- # digest=sha384 00:28:18.266 15:46:48 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:18.266 15:46:48 -- host/auth.sh@44 -- # keyid=3 00:28:18.266 15:46:48 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:18.266 15:46:48 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:18.266 15:46:48 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:18.266 15:46:48 -- host/auth.sh@49 
-- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:18.266 15:46:48 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 3 00:28:18.266 15:46:48 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:18.266 15:46:48 -- host/auth.sh@68 -- # digest=sha384 00:28:18.266 15:46:48 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:18.266 15:46:48 -- host/auth.sh@68 -- # keyid=3 00:28:18.266 15:46:48 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:18.266 15:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.266 15:46:48 -- common/autotest_common.sh@10 -- # set +x 00:28:18.266 15:46:48 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.266 15:46:48 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:18.266 15:46:48 -- nvmf/common.sh@717 -- # local ip 00:28:18.266 15:46:48 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:18.266 15:46:48 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:18.266 15:46:48 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:18.266 15:46:48 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:18.266 15:46:48 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:18.266 15:46:48 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:18.266 15:46:48 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:18.266 15:46:48 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:18.266 15:46:48 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:18.266 15:46:48 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:18.266 15:46:48 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.266 15:46:48 -- common/autotest_common.sh@10 -- # set +x 00:28:18.832 nvme0n1 00:28:18.832 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:18.832 15:46:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:18.832 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:18.832 15:46:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:18.832 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:18.832 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.090 15:46:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.090 15:46:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.090 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.090 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.090 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.090 15:46:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:19.090 15:46:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:19.090 15:46:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:19.090 15:46:49 -- host/auth.sh@44 -- # digest=sha384 00:28:19.090 15:46:49 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:19.090 15:46:49 -- host/auth.sh@44 -- # keyid=4 00:28:19.090 15:46:49 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:19.090 15:46:49 -- host/auth.sh@47 -- # echo 'hmac(sha384)' 00:28:19.090 15:46:49 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:19.090 15:46:49 -- host/auth.sh@49 -- # echo 
DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:19.090 15:46:49 -- host/auth.sh@111 -- # connect_authenticate sha384 ffdhe8192 4 00:28:19.090 15:46:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:19.090 15:46:49 -- host/auth.sh@68 -- # digest=sha384 00:28:19.090 15:46:49 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:19.090 15:46:49 -- host/auth.sh@68 -- # keyid=4 00:28:19.090 15:46:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:19.090 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.090 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.090 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.090 15:46:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:19.091 15:46:49 -- nvmf/common.sh@717 -- # local ip 00:28:19.091 15:46:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:19.091 15:46:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:19.091 15:46:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.091 15:46:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.091 15:46:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:19.091 15:46:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.091 15:46:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:19.091 15:46:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:19.091 15:46:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:19.091 15:46:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:19.091 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.091 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.742 nvme0n1 00:28:19.742 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.742 15:46:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.742 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.742 15:46:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:19.742 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.742 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.742 15:46:49 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:19.742 15:46:49 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:19.742 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.742 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.742 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.742 15:46:49 -- host/auth.sh@107 -- # for digest in "${digests[@]}" 00:28:19.742 15:46:49 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:19.742 15:46:49 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:19.742 15:46:49 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:19.742 15:46:49 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:19.742 15:46:49 -- host/auth.sh@44 -- # digest=sha512 00:28:19.742 15:46:49 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:19.742 15:46:49 -- host/auth.sh@44 -- # keyid=0 00:28:19.742 15:46:49 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:19.742 15:46:49 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:19.742 15:46:49 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:19.742 
15:46:49 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:19.742 15:46:49 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 0 00:28:19.742 15:46:49 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:19.742 15:46:49 -- host/auth.sh@68 -- # digest=sha512 00:28:19.742 15:46:49 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:19.742 15:46:49 -- host/auth.sh@68 -- # keyid=0 00:28:19.742 15:46:49 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:19.742 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.742 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.742 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.742 15:46:49 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:19.742 15:46:49 -- nvmf/common.sh@717 -- # local ip 00:28:19.742 15:46:49 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:19.742 15:46:49 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:19.742 15:46:49 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:19.742 15:46:49 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:19.742 15:46:49 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:19.742 15:46:49 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:19.742 15:46:49 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:19.742 15:46:49 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:19.742 15:46:49 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:19.742 15:46:49 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:19.742 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.742 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.742 nvme0n1 00:28:19.742 15:46:49 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:19.742 15:46:49 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:19.742 15:46:49 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:19.742 15:46:49 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:19.742 15:46:49 -- common/autotest_common.sh@10 -- # set +x 00:28:19.742 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.000 15:46:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:20.000 15:46:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.000 15:46:50 -- host/auth.sh@44 -- # digest=sha512 00:28:20.000 15:46:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.000 15:46:50 -- host/auth.sh@44 -- # keyid=1 00:28:20.000 15:46:50 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:20.000 15:46:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.000 15:46:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:20.000 15:46:50 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:20.000 15:46:50 
-- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 1 00:28:20.000 15:46:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.000 15:46:50 -- host/auth.sh@68 -- # digest=sha512 00:28:20.000 15:46:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:20.000 15:46:50 -- host/auth.sh@68 -- # keyid=1 00:28:20.000 15:46:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.000 15:46:50 -- nvmf/common.sh@717 -- # local ip 00:28:20.000 15:46:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.000 15:46:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.000 15:46:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.000 15:46:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.000 15:46:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.000 15:46:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.000 15:46:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.000 15:46:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.000 15:46:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.000 15:46:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 nvme0n1 00:28:20.000 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.000 15:46:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:20.000 15:46:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.000 15:46:50 -- host/auth.sh@44 -- # digest=sha512 00:28:20.000 15:46:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.000 15:46:50 -- host/auth.sh@44 -- # keyid=2 00:28:20.000 15:46:50 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:20.000 15:46:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.000 15:46:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:20.000 15:46:50 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:20.000 15:46:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 2 00:28:20.000 15:46:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.000 15:46:50 -- 
host/auth.sh@68 -- # digest=sha512 00:28:20.000 15:46:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:20.000 15:46:50 -- host/auth.sh@68 -- # keyid=2 00:28:20.000 15:46:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.000 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.000 15:46:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.000 15:46:50 -- nvmf/common.sh@717 -- # local ip 00:28:20.000 15:46:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.000 15:46:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.000 15:46:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.000 15:46:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.000 15:46:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.000 15:46:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.000 15:46:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.000 15:46:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.000 15:46:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.000 15:46:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:20.000 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.000 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.259 nvme0n1 00:28:20.259 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.259 15:46:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.259 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.259 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.259 15:46:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:20.259 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.259 15:46:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.259 15:46:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.259 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.259 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.259 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.259 15:46:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.259 15:46:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:20.259 15:46:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.259 15:46:50 -- host/auth.sh@44 -- # digest=sha512 00:28:20.259 15:46:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.259 15:46:50 -- host/auth.sh@44 -- # keyid=3 00:28:20.259 15:46:50 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:20.259 15:46:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.259 15:46:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:20.259 15:46:50 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:20.259 15:46:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 3 00:28:20.259 15:46:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.259 15:46:50 -- host/auth.sh@68 -- # digest=sha512 00:28:20.259 15:46:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:20.259 15:46:50 
-- host/auth.sh@68 -- # keyid=3 00:28:20.259 15:46:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.259 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.259 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.259 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.259 15:46:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.259 15:46:50 -- nvmf/common.sh@717 -- # local ip 00:28:20.259 15:46:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.259 15:46:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.259 15:46:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.259 15:46:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.259 15:46:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.259 15:46:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.259 15:46:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.259 15:46:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.259 15:46:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.259 15:46:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:20.259 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.259 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.259 nvme0n1 00:28:20.259 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.259 15:46:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.259 15:46:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:20.259 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.259 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.259 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.518 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.518 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.518 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.518 15:46:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:20.518 15:46:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.518 15:46:50 -- host/auth.sh@44 -- # digest=sha512 00:28:20.518 15:46:50 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:20.518 15:46:50 -- host/auth.sh@44 -- # keyid=4 00:28:20.518 15:46:50 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:20.518 15:46:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.518 15:46:50 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:20.518 15:46:50 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:20.518 15:46:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe2048 4 00:28:20.518 15:46:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.518 15:46:50 -- host/auth.sh@68 -- # digest=sha512 00:28:20.518 15:46:50 -- host/auth.sh@68 -- # dhgroup=ffdhe2048 00:28:20.518 15:46:50 -- host/auth.sh@68 -- # keyid=4 00:28:20.518 15:46:50 -- host/auth.sh@69 -- # 
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:20.518 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.518 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.518 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.518 15:46:50 -- nvmf/common.sh@717 -- # local ip 00:28:20.518 15:46:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.518 15:46:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.518 15:46:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.518 15:46:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.518 15:46:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.518 15:46:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.518 15:46:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.518 15:46:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.518 15:46:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.518 15:46:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:20.518 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.518 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.518 nvme0n1 00:28:20.518 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.518 15:46:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:20.518 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.518 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.518 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.518 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.518 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.518 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.518 15:46:50 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:20.518 15:46:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.518 15:46:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:20.518 15:46:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.518 15:46:50 -- host/auth.sh@44 -- # digest=sha512 00:28:20.518 15:46:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.518 15:46:50 -- host/auth.sh@44 -- # keyid=0 00:28:20.518 15:46:50 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:20.518 15:46:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.518 15:46:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:20.518 15:46:50 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:20.518 15:46:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 0 00:28:20.518 15:46:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.518 15:46:50 -- host/auth.sh@68 -- # digest=sha512 00:28:20.519 15:46:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:20.519 15:46:50 -- host/auth.sh@68 -- # keyid=0 00:28:20.519 15:46:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 
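Each connect_authenticate pass in this trace follows the same shape: the target-side key is installed for the given digest/dhgroup/keyid, the initiator's DH-HMAC-CHAP policy is narrowed with bdev_nvme_set_options, a controller is attached with the matching --dhchap-key, the resulting nvme0n1 namespace is waited on, and the controller is detached before the next combination. A minimal sketch of one such pass, assuming a reachable target at 10.0.0.1:4420 with the same host/subsystem NQNs used above, and with SPDK's rpc.py standing in for the rpc_cmd wrapper seen in the log:

    # Restrict the initiator to one digest/DH-group combination (sha512 + ffdhe3072 in this pass).
    rpc.py bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
    # Attach using the DH-HMAC-CHAP key slot that matches the key configured on the target side.
    rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0
    # Confirm the controller authenticated and came up, then tear it down before the next combination.
    rpc.py bdev_nvme_get_controllers | jq -r '.[].name'   # expect "nvme0"
    rpc.py bdev_nvme_detach_controller nvme0
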
00:28:20.519 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.519 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.519 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.519 15:46:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.519 15:46:50 -- nvmf/common.sh@717 -- # local ip 00:28:20.519 15:46:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.519 15:46:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.519 15:46:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.519 15:46:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.519 15:46:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.519 15:46:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.519 15:46:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.519 15:46:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.519 15:46:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.519 15:46:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:20.519 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.519 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.778 nvme0n1 00:28:20.778 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.778 15:46:50 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:20.778 15:46:50 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:20.778 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.778 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.778 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.778 15:46:50 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:20.778 15:46:50 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:20.778 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.778 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:20.778 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.778 15:46:50 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:20.778 15:46:50 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:20.778 15:46:50 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:20.778 15:46:50 -- host/auth.sh@44 -- # digest=sha512 00:28:20.778 15:46:50 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:20.778 15:46:50 -- host/auth.sh@44 -- # keyid=1 00:28:20.778 15:46:50 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:20.778 15:46:50 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:20.778 15:46:50 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:20.778 15:46:50 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:20.778 15:46:50 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 1 00:28:20.778 15:46:50 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:20.778 15:46:50 -- host/auth.sh@68 -- # digest=sha512 00:28:20.778 15:46:50 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:20.778 15:46:50 -- host/auth.sh@68 -- # keyid=1 00:28:20.778 15:46:50 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:20.778 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.778 15:46:50 -- 
common/autotest_common.sh@10 -- # set +x 00:28:20.778 15:46:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:20.778 15:46:50 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:20.778 15:46:50 -- nvmf/common.sh@717 -- # local ip 00:28:20.778 15:46:50 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:20.778 15:46:50 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:20.778 15:46:50 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:20.778 15:46:50 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:20.778 15:46:50 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:20.778 15:46:50 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:20.778 15:46:50 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:20.778 15:46:50 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:20.778 15:46:50 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:20.778 15:46:50 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:20.778 15:46:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:20.778 15:46:50 -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 nvme0n1 00:28:21.037 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.037 15:46:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.037 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.037 15:46:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.037 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.037 15:46:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.037 15:46:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.037 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.037 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.037 15:46:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.037 15:46:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:21.037 15:46:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.037 15:46:51 -- host/auth.sh@44 -- # digest=sha512 00:28:21.037 15:46:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:21.037 15:46:51 -- host/auth.sh@44 -- # keyid=2 00:28:21.037 15:46:51 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:21.037 15:46:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.037 15:46:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:21.037 15:46:51 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:21.037 15:46:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 2 00:28:21.037 15:46:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.037 15:46:51 -- host/auth.sh@68 -- # digest=sha512 00:28:21.037 15:46:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:21.037 15:46:51 -- host/auth.sh@68 -- # keyid=2 00:28:21.037 15:46:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:21.037 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.037 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.037 15:46:51 -- host/auth.sh@70 -- # 
get_main_ns_ip 00:28:21.037 15:46:51 -- nvmf/common.sh@717 -- # local ip 00:28:21.037 15:46:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.037 15:46:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.037 15:46:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.037 15:46:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.037 15:46:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.037 15:46:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.037 15:46:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.037 15:46:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.037 15:46:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.037 15:46:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:21.037 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.037 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 nvme0n1 00:28:21.037 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.037 15:46:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.037 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.037 15:46:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.037 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.037 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.296 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.296 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.296 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.296 15:46:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:21.296 15:46:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.296 15:46:51 -- host/auth.sh@44 -- # digest=sha512 00:28:21.296 15:46:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:21.296 15:46:51 -- host/auth.sh@44 -- # keyid=3 00:28:21.296 15:46:51 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:21.296 15:46:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.296 15:46:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:21.296 15:46:51 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:21.296 15:46:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 3 00:28:21.296 15:46:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.296 15:46:51 -- host/auth.sh@68 -- # digest=sha512 00:28:21.296 15:46:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:21.296 15:46:51 -- host/auth.sh@68 -- # keyid=3 00:28:21.296 15:46:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:21.296 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.296 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.296 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.296 15:46:51 -- nvmf/common.sh@717 -- # local ip 00:28:21.296 15:46:51 -- nvmf/common.sh@718 -- 
# ip_candidates=() 00:28:21.296 15:46:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.296 15:46:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.296 15:46:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.296 15:46:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.296 15:46:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.296 15:46:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.296 15:46:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.296 15:46:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.296 15:46:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:21.296 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.296 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.296 nvme0n1 00:28:21.296 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.296 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.296 15:46:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.296 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.296 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.296 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.296 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.296 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.296 15:46:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:21.296 15:46:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.296 15:46:51 -- host/auth.sh@44 -- # digest=sha512 00:28:21.296 15:46:51 -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:21.296 15:46:51 -- host/auth.sh@44 -- # keyid=4 00:28:21.296 15:46:51 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:21.296 15:46:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.296 15:46:51 -- host/auth.sh@48 -- # echo ffdhe3072 00:28:21.296 15:46:51 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:21.296 15:46:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe3072 4 00:28:21.296 15:46:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.296 15:46:51 -- host/auth.sh@68 -- # digest=sha512 00:28:21.296 15:46:51 -- host/auth.sh@68 -- # dhgroup=ffdhe3072 00:28:21.296 15:46:51 -- host/auth.sh@68 -- # keyid=4 00:28:21.296 15:46:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:21.296 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.296 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.296 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.296 15:46:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.296 15:46:51 -- nvmf/common.sh@717 -- # local ip 00:28:21.296 15:46:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.296 15:46:51 -- nvmf/common.sh@718 -- # local -A 
ip_candidates 00:28:21.296 15:46:51 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.296 15:46:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.296 15:46:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.296 15:46:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.296 15:46:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.297 15:46:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.297 15:46:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.297 15:46:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:21.297 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.297 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.555 nvme0n1 00:28:21.555 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.555 15:46:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.555 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.555 15:46:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.555 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.555 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.555 15:46:51 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.555 15:46:51 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.555 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.555 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.555 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.555 15:46:51 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:21.555 15:46:51 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.555 15:46:51 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:21.555 15:46:51 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.555 15:46:51 -- host/auth.sh@44 -- # digest=sha512 00:28:21.555 15:46:51 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.555 15:46:51 -- host/auth.sh@44 -- # keyid=0 00:28:21.555 15:46:51 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:21.555 15:46:51 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.555 15:46:51 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:21.555 15:46:51 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:21.555 15:46:51 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 0 00:28:21.555 15:46:51 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.555 15:46:51 -- host/auth.sh@68 -- # digest=sha512 00:28:21.555 15:46:51 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:21.555 15:46:51 -- host/auth.sh@68 -- # keyid=0 00:28:21.555 15:46:51 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.555 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.555 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.555 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.555 15:46:51 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.555 15:46:51 -- nvmf/common.sh@717 -- # local ip 00:28:21.555 15:46:51 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.555 15:46:51 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.555 15:46:51 -- nvmf/common.sh@720 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.555 15:46:51 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.555 15:46:51 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.555 15:46:51 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.555 15:46:51 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.555 15:46:51 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.555 15:46:51 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.555 15:46:51 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:21.555 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.555 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.814 nvme0n1 00:28:21.814 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.814 15:46:51 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:21.814 15:46:51 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:21.814 15:46:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.814 15:46:51 -- common/autotest_common.sh@10 -- # set +x 00:28:21.814 15:46:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.814 15:46:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:21.814 15:46:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:21.814 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.814 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.814 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.814 15:46:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:21.814 15:46:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:28:21.814 15:46:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:21.814 15:46:52 -- host/auth.sh@44 -- # digest=sha512 00:28:21.814 15:46:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:21.814 15:46:52 -- host/auth.sh@44 -- # keyid=1 00:28:21.814 15:46:52 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:21.814 15:46:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:21.814 15:46:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:21.814 15:46:52 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:21.814 15:46:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 1 00:28:21.814 15:46:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:21.814 15:46:52 -- host/auth.sh@68 -- # digest=sha512 00:28:21.814 15:46:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:21.814 15:46:52 -- host/auth.sh@68 -- # keyid=1 00:28:21.814 15:46:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:21.814 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.814 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:21.814 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:21.814 15:46:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:21.814 15:46:52 -- nvmf/common.sh@717 -- # local ip 00:28:21.814 15:46:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:21.814 15:46:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:21.814 15:46:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:21.814 15:46:52 -- nvmf/common.sh@721 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:21.814 15:46:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:21.814 15:46:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:21.814 15:46:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:21.814 15:46:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:21.814 15:46:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:21.814 15:46:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:21.814 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:21.814 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.073 nvme0n1 00:28:22.073 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.073 15:46:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:22.073 15:46:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.073 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.073 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.073 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.073 15:46:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.073 15:46:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.073 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.073 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.073 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.073 15:46:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:22.073 15:46:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:28:22.073 15:46:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:22.073 15:46:52 -- host/auth.sh@44 -- # digest=sha512 00:28:22.073 15:46:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.073 15:46:52 -- host/auth.sh@44 -- # keyid=2 00:28:22.073 15:46:52 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:22.073 15:46:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:22.073 15:46:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:22.073 15:46:52 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:22.073 15:46:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 2 00:28:22.073 15:46:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:22.073 15:46:52 -- host/auth.sh@68 -- # digest=sha512 00:28:22.073 15:46:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:22.073 15:46:52 -- host/auth.sh@68 -- # keyid=2 00:28:22.073 15:46:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:22.073 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.073 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.073 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.073 15:46:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:22.073 15:46:52 -- nvmf/common.sh@717 -- # local ip 00:28:22.073 15:46:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:22.073 15:46:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:22.073 15:46:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.073 15:46:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.073 15:46:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:22.073 15:46:52 -- nvmf/common.sh@723 -- # [[ -z 
NVMF_INITIATOR_IP ]] 00:28:22.073 15:46:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:22.073 15:46:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:22.073 15:46:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:22.073 15:46:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:22.073 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.073 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.331 nvme0n1 00:28:22.331 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.331 15:46:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.331 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.331 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.331 15:46:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:22.331 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.331 15:46:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.331 15:46:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.331 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.331 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.331 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.331 15:46:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:22.331 15:46:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:28:22.331 15:46:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:22.331 15:46:52 -- host/auth.sh@44 -- # digest=sha512 00:28:22.331 15:46:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.331 15:46:52 -- host/auth.sh@44 -- # keyid=3 00:28:22.331 15:46:52 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:22.331 15:46:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:22.331 15:46:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:22.331 15:46:52 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:22.331 15:46:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 3 00:28:22.331 15:46:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:22.331 15:46:52 -- host/auth.sh@68 -- # digest=sha512 00:28:22.331 15:46:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:22.331 15:46:52 -- host/auth.sh@68 -- # keyid=3 00:28:22.331 15:46:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:22.331 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.331 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.331 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.331 15:46:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:22.331 15:46:52 -- nvmf/common.sh@717 -- # local ip 00:28:22.331 15:46:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:22.331 15:46:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:22.331 15:46:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.331 15:46:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.331 15:46:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:22.331 15:46:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.331 15:46:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:22.331 15:46:52 -- 
nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:22.331 15:46:52 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:22.331 15:46:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:22.331 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.331 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 nvme0n1 00:28:22.589 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.589 15:46:52 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.589 15:46:52 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:22.589 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.589 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.589 15:46:52 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.589 15:46:52 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.589 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.589 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.589 15:46:52 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:22.589 15:46:52 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:28:22.589 15:46:52 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:22.589 15:46:52 -- host/auth.sh@44 -- # digest=sha512 00:28:22.589 15:46:52 -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:22.589 15:46:52 -- host/auth.sh@44 -- # keyid=4 00:28:22.589 15:46:52 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:22.589 15:46:52 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:22.589 15:46:52 -- host/auth.sh@48 -- # echo ffdhe4096 00:28:22.589 15:46:52 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:22.589 15:46:52 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe4096 4 00:28:22.589 15:46:52 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:22.589 15:46:52 -- host/auth.sh@68 -- # digest=sha512 00:28:22.589 15:46:52 -- host/auth.sh@68 -- # dhgroup=ffdhe4096 00:28:22.589 15:46:52 -- host/auth.sh@68 -- # keyid=4 00:28:22.589 15:46:52 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:28:22.589 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.589 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.589 15:46:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.589 15:46:52 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:22.589 15:46:52 -- nvmf/common.sh@717 -- # local ip 00:28:22.589 15:46:52 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:22.589 15:46:52 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:22.589 15:46:52 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.589 15:46:52 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.589 15:46:52 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:22.589 15:46:52 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.589 15:46:52 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:22.589 15:46:52 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:22.589 15:46:52 -- 
nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:22.590 15:46:52 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:22.590 15:46:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.590 15:46:52 -- common/autotest_common.sh@10 -- # set +x 00:28:22.848 nvme0n1 00:28:22.848 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.848 15:46:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:22.848 15:46:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:22.848 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.848 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:22.848 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.848 15:46:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:22.848 15:46:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.848 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.848 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:22.848 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.848 15:46:53 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:22.848 15:46:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:22.848 15:46:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:28:22.848 15:46:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:22.848 15:46:53 -- host/auth.sh@44 -- # digest=sha512 00:28:22.848 15:46:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:22.848 15:46:53 -- host/auth.sh@44 -- # keyid=0 00:28:22.848 15:46:53 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:22.848 15:46:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:22.848 15:46:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:22.848 15:46:53 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:22.848 15:46:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 0 00:28:22.848 15:46:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:22.848 15:46:53 -- host/auth.sh@68 -- # digest=sha512 00:28:22.848 15:46:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:22.848 15:46:53 -- host/auth.sh@68 -- # keyid=0 00:28:22.848 15:46:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:22.848 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.848 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:22.849 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:22.849 15:46:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:22.849 15:46:53 -- nvmf/common.sh@717 -- # local ip 00:28:22.849 15:46:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:22.849 15:46:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:22.849 15:46:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:22.849 15:46:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:22.849 15:46:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:22.849 15:46:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:22.849 15:46:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:22.849 15:46:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:22.849 15:46:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:22.849 15:46:53 -- host/auth.sh@70 -- # 
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:22.849 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:22.849 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.420 nvme0n1 00:28:23.420 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.420 15:46:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.420 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.420 15:46:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:23.420 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.420 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.420 15:46:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.420 15:46:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.420 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.420 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.420 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.420 15:46:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:23.420 15:46:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:28:23.420 15:46:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:23.420 15:46:53 -- host/auth.sh@44 -- # digest=sha512 00:28:23.420 15:46:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.420 15:46:53 -- host/auth.sh@44 -- # keyid=1 00:28:23.420 15:46:53 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:23.420 15:46:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:23.420 15:46:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:23.420 15:46:53 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:23.420 15:46:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 1 00:28:23.420 15:46:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:23.420 15:46:53 -- host/auth.sh@68 -- # digest=sha512 00:28:23.420 15:46:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:23.420 15:46:53 -- host/auth.sh@68 -- # keyid=1 00:28:23.421 15:46:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:23.421 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.421 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.421 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.421 15:46:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:23.421 15:46:53 -- nvmf/common.sh@717 -- # local ip 00:28:23.421 15:46:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:23.421 15:46:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:23.421 15:46:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.421 15:46:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.421 15:46:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:23.421 15:46:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.421 15:46:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:23.421 15:46:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:23.421 15:46:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:23.421 15:46:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:23.421 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.421 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.679 nvme0n1 00:28:23.679 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.679 15:46:53 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:23.679 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.679 15:46:53 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:23.679 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.679 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.679 15:46:53 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:23.679 15:46:53 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.679 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.679 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.679 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.679 15:46:53 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:23.679 15:46:53 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:23.679 15:46:53 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:23.679 15:46:53 -- host/auth.sh@44 -- # digest=sha512 00:28:23.679 15:46:53 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:23.679 15:46:53 -- host/auth.sh@44 -- # keyid=2 00:28:23.679 15:46:53 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:23.679 15:46:53 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:23.679 15:46:53 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:23.679 15:46:53 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:23.680 15:46:53 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 2 00:28:23.680 15:46:53 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:23.680 15:46:53 -- host/auth.sh@68 -- # digest=sha512 00:28:23.680 15:46:53 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:23.680 15:46:53 -- host/auth.sh@68 -- # keyid=2 00:28:23.680 15:46:53 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:23.680 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.680 15:46:53 -- common/autotest_common.sh@10 -- # set +x 00:28:23.680 15:46:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:23.680 15:46:53 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:23.680 15:46:53 -- nvmf/common.sh@717 -- # local ip 00:28:23.680 15:46:53 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:23.680 15:46:53 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:23.680 15:46:53 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:23.680 15:46:53 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:23.680 15:46:53 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:23.680 15:46:53 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:23.680 15:46:53 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:23.680 15:46:53 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:23.680 15:46:53 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:23.680 15:46:53 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:23.680 15:46:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:23.680 15:46:53 -- 
common/autotest_common.sh@10 -- # set +x 00:28:24.245 nvme0n1 00:28:24.245 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.245 15:46:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:24.245 15:46:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.245 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.245 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.245 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.245 15:46:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.245 15:46:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.245 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.245 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.245 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.245 15:46:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:24.245 15:46:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:24.245 15:46:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:24.245 15:46:54 -- host/auth.sh@44 -- # digest=sha512 00:28:24.245 15:46:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.245 15:46:54 -- host/auth.sh@44 -- # keyid=3 00:28:24.245 15:46:54 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:24.245 15:46:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:24.245 15:46:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:24.245 15:46:54 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:24.245 15:46:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 3 00:28:24.245 15:46:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:24.245 15:46:54 -- host/auth.sh@68 -- # digest=sha512 00:28:24.245 15:46:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:24.245 15:46:54 -- host/auth.sh@68 -- # keyid=3 00:28:24.245 15:46:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:24.245 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.245 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.245 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.245 15:46:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:24.245 15:46:54 -- nvmf/common.sh@717 -- # local ip 00:28:24.245 15:46:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:24.245 15:46:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:24.245 15:46:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.245 15:46:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.245 15:46:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:24.245 15:46:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.245 15:46:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:24.245 15:46:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:24.245 15:46:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:24.245 15:46:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:24.245 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.245 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.516 nvme0n1 00:28:24.516 15:46:54 -- common/autotest_common.sh@577 -- 
# [[ 0 == 0 ]] 00:28:24.516 15:46:54 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:24.516 15:46:54 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:24.516 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.516 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.516 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.785 15:46:54 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:24.785 15:46:54 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:24.785 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.785 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.785 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.785 15:46:54 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:24.785 15:46:54 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:24.785 15:46:54 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:24.785 15:46:54 -- host/auth.sh@44 -- # digest=sha512 00:28:24.785 15:46:54 -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:24.785 15:46:54 -- host/auth.sh@44 -- # keyid=4 00:28:24.785 15:46:54 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:24.785 15:46:54 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:24.785 15:46:54 -- host/auth.sh@48 -- # echo ffdhe6144 00:28:24.785 15:46:54 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:24.785 15:46:54 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe6144 4 00:28:24.785 15:46:54 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:24.785 15:46:54 -- host/auth.sh@68 -- # digest=sha512 00:28:24.785 15:46:54 -- host/auth.sh@68 -- # dhgroup=ffdhe6144 00:28:24.785 15:46:54 -- host/auth.sh@68 -- # keyid=4 00:28:24.785 15:46:54 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:24.785 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.785 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:24.785 15:46:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:24.785 15:46:54 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:24.785 15:46:54 -- nvmf/common.sh@717 -- # local ip 00:28:24.785 15:46:54 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:24.785 15:46:54 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:24.785 15:46:54 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:24.785 15:46:54 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:24.785 15:46:54 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:24.785 15:46:54 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:24.785 15:46:54 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:24.785 15:46:54 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:24.785 15:46:54 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:24.785 15:46:54 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:24.785 15:46:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:24.785 15:46:54 -- common/autotest_common.sh@10 -- # set +x 00:28:25.058 nvme0n1 00:28:25.058 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.058 15:46:55 -- host/auth.sh@73 -- # rpc_cmd 
bdev_nvme_get_controllers 00:28:25.059 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.059 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.059 15:46:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:25.059 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.059 15:46:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.059 15:46:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.059 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.059 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.059 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.059 15:46:55 -- host/auth.sh@108 -- # for dhgroup in "${dhgroups[@]}" 00:28:25.059 15:46:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:25.059 15:46:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:25.059 15:46:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:25.059 15:46:55 -- host/auth.sh@44 -- # digest=sha512 00:28:25.059 15:46:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.059 15:46:55 -- host/auth.sh@44 -- # keyid=0 00:28:25.059 15:46:55 -- host/auth.sh@45 -- # key=DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:25.059 15:46:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:25.059 15:46:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:25.059 15:46:55 -- host/auth.sh@49 -- # echo DHHC-1:00:YjlhZjhkOTgyNThkMzQ0ZTAwMGEzNzQ0OTM2ZjY5MmHvLwiC: 00:28:25.059 15:46:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 0 00:28:25.059 15:46:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:25.059 15:46:55 -- host/auth.sh@68 -- # digest=sha512 00:28:25.059 15:46:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:25.059 15:46:55 -- host/auth.sh@68 -- # keyid=0 00:28:25.059 15:46:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:25.059 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.059 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.059 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.059 15:46:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:25.059 15:46:55 -- nvmf/common.sh@717 -- # local ip 00:28:25.059 15:46:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:25.059 15:46:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:25.059 15:46:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.059 15:46:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.059 15:46:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:25.059 15:46:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.059 15:46:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:25.059 15:46:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:25.059 15:46:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:25.059 15:46:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 00:28:25.059 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.059 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 nvme0n1 00:28:25.626 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.626 15:46:55 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:25.626 15:46:55 -- common/autotest_common.sh@549 -- # 
xtrace_disable 00:28:25.626 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 15:46:55 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:25.626 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.626 15:46:55 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:25.626 15:46:55 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:25.626 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.626 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.626 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.626 15:46:55 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:25.626 15:46:55 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:25.626 15:46:55 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:25.626 15:46:55 -- host/auth.sh@44 -- # digest=sha512 00:28:25.626 15:46:55 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:25.626 15:46:55 -- host/auth.sh@44 -- # keyid=1 00:28:25.626 15:46:55 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:25.626 15:46:55 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:25.626 15:46:55 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:25.626 15:46:55 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:25.626 15:46:55 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 1 00:28:25.626 15:46:55 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:25.626 15:46:55 -- host/auth.sh@68 -- # digest=sha512 00:28:25.626 15:46:55 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:25.626 15:46:55 -- host/auth.sh@68 -- # keyid=1 00:28:25.626 15:46:55 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:25.627 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.627 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:25.627 15:46:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:25.885 15:46:55 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:25.885 15:46:55 -- nvmf/common.sh@717 -- # local ip 00:28:25.885 15:46:55 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:25.885 15:46:55 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:25.885 15:46:55 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:25.885 15:46:55 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:25.885 15:46:55 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:25.885 15:46:55 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:25.885 15:46:55 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:25.885 15:46:55 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:25.885 15:46:55 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:25.885 15:46:55 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 00:28:25.885 15:46:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:25.885 15:46:55 -- common/autotest_common.sh@10 -- # set +x 00:28:26.450 nvme0n1 00:28:26.450 15:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.450 15:46:56 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:26.450 15:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.450 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:28:26.450 15:46:56 -- 
host/auth.sh@73 -- # jq -r '.[].name' 00:28:26.450 15:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.450 15:46:56 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:26.450 15:46:56 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:26.450 15:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.450 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:28:26.450 15:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.450 15:46:56 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:26.450 15:46:56 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:26.450 15:46:56 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:26.450 15:46:56 -- host/auth.sh@44 -- # digest=sha512 00:28:26.450 15:46:56 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:26.450 15:46:56 -- host/auth.sh@44 -- # keyid=2 00:28:26.450 15:46:56 -- host/auth.sh@45 -- # key=DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:26.450 15:46:56 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:26.450 15:46:56 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:26.450 15:46:56 -- host/auth.sh@49 -- # echo DHHC-1:01:ZmE2MzM4MmM5NTJlZWUxMWFhYjFkYjhhNTQxNDhmZDLFP6Rf: 00:28:26.450 15:46:56 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 2 00:28:26.450 15:46:56 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:26.450 15:46:56 -- host/auth.sh@68 -- # digest=sha512 00:28:26.450 15:46:56 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:26.450 15:46:56 -- host/auth.sh@68 -- # keyid=2 00:28:26.450 15:46:56 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:26.450 15:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.450 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:28:26.450 15:46:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:26.450 15:46:56 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:26.450 15:46:56 -- nvmf/common.sh@717 -- # local ip 00:28:26.450 15:46:56 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:26.450 15:46:56 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:26.450 15:46:56 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:26.450 15:46:56 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:26.450 15:46:56 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:26.450 15:46:56 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:26.450 15:46:56 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:26.450 15:46:56 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:26.450 15:46:56 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:26.450 15:46:56 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:26.450 15:46:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:26.450 15:46:56 -- common/autotest_common.sh@10 -- # set +x 00:28:27.016 nvme0n1 00:28:27.016 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.016 15:46:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.016 15:46:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:27.016 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.016 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.016 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.016 15:46:57 -- host/auth.sh@73 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:28:27.016 15:46:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.016 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.016 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.016 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.016 15:46:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:27.016 15:46:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:27.016 15:46:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:27.016 15:46:57 -- host/auth.sh@44 -- # digest=sha512 00:28:27.016 15:46:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.016 15:46:57 -- host/auth.sh@44 -- # keyid=3 00:28:27.016 15:46:57 -- host/auth.sh@45 -- # key=DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:27.016 15:46:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:27.016 15:46:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:27.016 15:46:57 -- host/auth.sh@49 -- # echo DHHC-1:02:Mjg4ZmIyZGFjNWY1YjVjZjc1MTg5Mzc0MzYzZTA3NjEzM2VkZDU4MTQzNmNkMWI1/bvVZw==: 00:28:27.016 15:46:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 3 00:28:27.016 15:46:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:27.016 15:46:57 -- host/auth.sh@68 -- # digest=sha512 00:28:27.016 15:46:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:27.016 15:46:57 -- host/auth.sh@68 -- # keyid=3 00:28:27.016 15:46:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:27.016 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.016 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.016 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.016 15:46:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:27.016 15:46:57 -- nvmf/common.sh@717 -- # local ip 00:28:27.016 15:46:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:27.016 15:46:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:27.016 15:46:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.016 15:46:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.016 15:46:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:27.016 15:46:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.016 15:46:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:27.016 15:46:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:27.016 15:46:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:27.016 15:46:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 00:28:27.016 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.016 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.949 nvme0n1 00:28:27.949 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.949 15:46:57 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:27.949 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.949 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.949 15:46:57 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:27.949 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.949 15:46:57 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:27.949 15:46:57 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:27.949 
15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.949 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.949 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.949 15:46:57 -- host/auth.sh@109 -- # for keyid in "${!keys[@]}" 00:28:27.949 15:46:57 -- host/auth.sh@110 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:27.949 15:46:57 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:27.949 15:46:57 -- host/auth.sh@44 -- # digest=sha512 00:28:27.949 15:46:57 -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:27.949 15:46:57 -- host/auth.sh@44 -- # keyid=4 00:28:27.949 15:46:57 -- host/auth.sh@45 -- # key=DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:27.949 15:46:57 -- host/auth.sh@47 -- # echo 'hmac(sha512)' 00:28:27.949 15:46:57 -- host/auth.sh@48 -- # echo ffdhe8192 00:28:27.949 15:46:57 -- host/auth.sh@49 -- # echo DHHC-1:03:YzhlMTZkZDc5MGU2N2VkMDBhODBiNmM5YzdkNjdmNDU2MDcyNjM1YzE2YzE4NzFhNDFmYjM1MGMwODM3MjczMliWbZE=: 00:28:27.949 15:46:57 -- host/auth.sh@111 -- # connect_authenticate sha512 ffdhe8192 4 00:28:27.949 15:46:57 -- host/auth.sh@66 -- # local digest dhgroup keyid 00:28:27.949 15:46:57 -- host/auth.sh@68 -- # digest=sha512 00:28:27.949 15:46:57 -- host/auth.sh@68 -- # dhgroup=ffdhe8192 00:28:27.949 15:46:57 -- host/auth.sh@68 -- # keyid=4 00:28:27.949 15:46:57 -- host/auth.sh@69 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:27.949 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.949 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:27.949 15:46:57 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:27.949 15:46:57 -- host/auth.sh@70 -- # get_main_ns_ip 00:28:27.949 15:46:57 -- nvmf/common.sh@717 -- # local ip 00:28:27.949 15:46:57 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:27.949 15:46:57 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:27.949 15:46:57 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:27.949 15:46:57 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:27.949 15:46:57 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:27.949 15:46:57 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:27.949 15:46:57 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:27.949 15:46:57 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:27.949 15:46:57 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:27.949 15:46:57 -- host/auth.sh@70 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:27.949 15:46:57 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:27.949 15:46:57 -- common/autotest_common.sh@10 -- # set +x 00:28:28.515 nvme0n1 00:28:28.515 15:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.515 15:46:58 -- host/auth.sh@73 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.515 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.515 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.515 15:46:58 -- host/auth.sh@73 -- # jq -r '.[].name' 00:28:28.515 15:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.515 15:46:58 -- host/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.515 15:46:58 -- host/auth.sh@74 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.515 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.515 
15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.515 15:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.515 15:46:58 -- host/auth.sh@117 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:28.515 15:46:58 -- host/auth.sh@42 -- # local digest dhgroup keyid key 00:28:28.515 15:46:58 -- host/auth.sh@44 -- # digest=sha256 00:28:28.515 15:46:58 -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:28.515 15:46:58 -- host/auth.sh@44 -- # keyid=1 00:28:28.515 15:46:58 -- host/auth.sh@45 -- # key=DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:28.515 15:46:58 -- host/auth.sh@47 -- # echo 'hmac(sha256)' 00:28:28.515 15:46:58 -- host/auth.sh@48 -- # echo ffdhe2048 00:28:28.515 15:46:58 -- host/auth.sh@49 -- # echo DHHC-1:00:Y2E1NjE3NGM4ZDZkOTRhOGJiMjc2ZjYzODRkNThhYTExN2RmMzJkYTE5YmM2OTU0UBZ03g==: 00:28:28.515 15:46:58 -- host/auth.sh@118 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:28.515 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.515 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.515 15:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.515 15:46:58 -- host/auth.sh@119 -- # get_main_ns_ip 00:28:28.515 15:46:58 -- nvmf/common.sh@717 -- # local ip 00:28:28.515 15:46:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:28.515 15:46:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:28.515 15:46:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.515 15:46:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.515 15:46:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:28.515 15:46:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.515 15:46:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:28.515 15:46:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:28.515 15:46:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:28.515 15:46:58 -- host/auth.sh@119 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:28.515 15:46:58 -- common/autotest_common.sh@638 -- # local es=0 00:28:28.515 15:46:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:28.515 15:46:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:28.515 15:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:28.515 15:46:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:28.515 15:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:28.515 15:46:58 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:28.515 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.515 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.515 2024/04/26 15:46:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:28:28.515 request: 00:28:28.515 { 00:28:28.515 "method": 
"bdev_nvme_attach_controller", 00:28:28.515 "params": { 00:28:28.515 "name": "nvme0", 00:28:28.515 "trtype": "tcp", 00:28:28.515 "traddr": "10.0.0.1", 00:28:28.515 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:28.515 "adrfam": "ipv4", 00:28:28.515 "trsvcid": "4420", 00:28:28.515 "subnqn": "nqn.2024-02.io.spdk:cnode0" 00:28:28.515 } 00:28:28.515 } 00:28:28.515 Got JSON-RPC error response 00:28:28.515 GoRPCClient: error on JSON-RPC call 00:28:28.515 15:46:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:28.515 15:46:58 -- common/autotest_common.sh@641 -- # es=1 00:28:28.515 15:46:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:28.515 15:46:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:28.515 15:46:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:28.515 15:46:58 -- host/auth.sh@121 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.515 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.515 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.515 15:46:58 -- host/auth.sh@121 -- # jq length 00:28:28.515 15:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.515 15:46:58 -- host/auth.sh@121 -- # (( 0 == 0 )) 00:28:28.515 15:46:58 -- host/auth.sh@124 -- # get_main_ns_ip 00:28:28.515 15:46:58 -- nvmf/common.sh@717 -- # local ip 00:28:28.515 15:46:58 -- nvmf/common.sh@718 -- # ip_candidates=() 00:28:28.515 15:46:58 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:28:28.516 15:46:58 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:28:28.516 15:46:58 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:28:28.516 15:46:58 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:28:28.516 15:46:58 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:28:28.516 15:46:58 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:28:28.516 15:46:58 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:28:28.516 15:46:58 -- nvmf/common.sh@731 -- # echo 10.0.0.1 00:28:28.516 15:46:58 -- host/auth.sh@124 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:28.516 15:46:58 -- common/autotest_common.sh@638 -- # local es=0 00:28:28.516 15:46:58 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:28.516 15:46:58 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:28:28.516 15:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:28.516 15:46:58 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:28:28.516 15:46:58 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:28:28.516 15:46:58 -- common/autotest_common.sh@641 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:28.516 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.516 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.516 2024/04/26 15:46:58 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 dhchap_key:key2 hostnqn:nqn.2024-02.io.spdk:host0 name:nvme0 subnqn:nqn.2024-02.io.spdk:cnode0 traddr:10.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:28:28.516 
request: 00:28:28.516 { 00:28:28.516 "method": "bdev_nvme_attach_controller", 00:28:28.516 "params": { 00:28:28.516 "name": "nvme0", 00:28:28.516 "trtype": "tcp", 00:28:28.516 "traddr": "10.0.0.1", 00:28:28.516 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:28.516 "adrfam": "ipv4", 00:28:28.516 "trsvcid": "4420", 00:28:28.516 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:28.516 "dhchap_key": "key2" 00:28:28.516 } 00:28:28.516 } 00:28:28.516 Got JSON-RPC error response 00:28:28.516 GoRPCClient: error on JSON-RPC call 00:28:28.516 15:46:58 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:28:28.516 15:46:58 -- common/autotest_common.sh@641 -- # es=1 00:28:28.516 15:46:58 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:28:28.516 15:46:58 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:28:28.516 15:46:58 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:28:28.516 15:46:58 -- host/auth.sh@127 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.516 15:46:58 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:28.516 15:46:58 -- host/auth.sh@127 -- # jq length 00:28:28.516 15:46:58 -- common/autotest_common.sh@10 -- # set +x 00:28:28.516 15:46:58 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:28.774 15:46:58 -- host/auth.sh@127 -- # (( 0 == 0 )) 00:28:28.774 15:46:58 -- host/auth.sh@129 -- # trap - SIGINT SIGTERM EXIT 00:28:28.774 15:46:58 -- host/auth.sh@130 -- # cleanup 00:28:28.774 15:46:58 -- host/auth.sh@24 -- # nvmftestfini 00:28:28.774 15:46:58 -- nvmf/common.sh@477 -- # nvmfcleanup 00:28:28.774 15:46:58 -- nvmf/common.sh@117 -- # sync 00:28:28.774 15:46:58 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.774 15:46:58 -- nvmf/common.sh@120 -- # set +e 00:28:28.774 15:46:58 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.774 15:46:58 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.774 rmmod nvme_tcp 00:28:28.774 rmmod nvme_fabrics 00:28:28.774 15:46:58 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.774 15:46:58 -- nvmf/common.sh@124 -- # set -e 00:28:28.774 15:46:58 -- nvmf/common.sh@125 -- # return 0 00:28:28.774 15:46:58 -- nvmf/common.sh@478 -- # '[' -n 83720 ']' 00:28:28.774 15:46:58 -- nvmf/common.sh@479 -- # killprocess 83720 00:28:28.774 15:46:58 -- common/autotest_common.sh@936 -- # '[' -z 83720 ']' 00:28:28.774 15:46:58 -- common/autotest_common.sh@940 -- # kill -0 83720 00:28:28.774 15:46:58 -- common/autotest_common.sh@941 -- # uname 00:28:28.774 15:46:58 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:28.774 15:46:58 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 83720 00:28:28.774 15:46:58 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:28.774 15:46:58 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:28.774 killing process with pid 83720 00:28:28.774 15:46:58 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 83720' 00:28:28.774 15:46:58 -- common/autotest_common.sh@955 -- # kill 83720 00:28:28.774 15:46:58 -- common/autotest_common.sh@960 -- # wait 83720 00:28:29.032 15:46:59 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:28:29.032 15:46:59 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:28:29.032 15:46:59 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:28:29.032 15:46:59 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.032 15:46:59 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.032 15:46:59 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.032 
15:46:59 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.032 15:46:59 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:29.032 15:46:59 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:28:29.032 15:46:59 -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:28:29.032 15:46:59 -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:29.032 15:46:59 -- host/auth.sh@27 -- # clean_kernel_target 00:28:29.032 15:46:59 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:28:29.032 15:46:59 -- nvmf/common.sh@675 -- # echo 0 00:28:29.032 15:46:59 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:29.032 15:46:59 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:29.032 15:46:59 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:29.032 15:46:59 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:29.032 15:46:59 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:28:29.032 15:46:59 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:28:29.032 15:46:59 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:29.598 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:29.855 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:28:29.855 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:28:29.855 15:47:00 -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.E1m /tmp/spdk.key-null.UEe /tmp/spdk.key-sha256.tpt /tmp/spdk.key-sha384.idD /tmp/spdk.key-sha512.P6P /home/vagrant/spdk_repo/spdk/../output/nvme-auth.log 00:28:29.855 15:47:00 -- host/auth.sh@31 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:28:30.420 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:28:30.420 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:30.420 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:28:30.420 00:28:30.420 real 0m39.158s 00:28:30.420 user 0m35.326s 00:28:30.420 sys 0m3.617s 00:28:30.420 15:47:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:30.420 ************************************ 00:28:30.420 END TEST nvmf_auth 00:28:30.420 ************************************ 00:28:30.420 15:47:00 -- common/autotest_common.sh@10 -- # set +x 00:28:30.420 15:47:00 -- nvmf/nvmf.sh@104 -- # [[ tcp == \t\c\p ]] 00:28:30.420 15:47:00 -- nvmf/nvmf.sh@105 -- # run_test nvmf_digest /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.420 15:47:00 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:28:30.420 15:47:00 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:30.420 15:47:00 -- common/autotest_common.sh@10 -- # set +x 00:28:30.421 ************************************ 00:28:30.421 START TEST nvmf_digest 00:28:30.421 ************************************ 00:28:30.421 15:47:00 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/digest.sh --transport=tcp 00:28:30.421 * Looking for test storage... 
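
Note: the nvmf_auth run that just ended cycles every DH-CHAP dhgroup/key combination through the same four RPC steps. The sketch below is condensed from the xtrace above, not the actual host/auth.sh source: rpc_cmd, get_main_ns_ip, nvmet_auth_set_key and the keys/dhgroups arrays are the harness names visible in the trace, while the function body layout and the configfs writes hidden behind nvmet_auth_set_key are assumptions.

  # Per-iteration flow, reconstructed from the trace (assumes the SPDK test harness is sourced)
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a "$(get_main_ns_ip)" -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid"
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

  for dhgroup in "${dhgroups[@]}"; do          # ffdhe4096, ffdhe6144, ffdhe8192 in this part of the log
      for keyid in "${!keys[@]}"; do           # key0..key4, one DHHC-1 secret each
          nvmet_auth_set_key sha512 "$dhgroup" "$keyid"    # program the kernel nvmet target side
          connect_authenticate sha512 "$dhgroup" "$keyid"  # attach, verify controller name, detach
      done
  done

The NOT-wrapped attach attempts near the end of the trace, which fail with Code=-32602 Invalid parameters, check the opposite direction: connecting with no key, or with a key slot that no longer matches the target configuration, must be rejected.
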
00:28:30.679 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:28:30.679 15:47:00 -- host/digest.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:28:30.679 15:47:00 -- nvmf/common.sh@7 -- # uname -s 00:28:30.679 15:47:00 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:30.679 15:47:00 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:30.679 15:47:00 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:30.679 15:47:00 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:30.679 15:47:00 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:30.679 15:47:00 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:30.679 15:47:00 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:30.679 15:47:00 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:30.679 15:47:00 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:30.679 15:47:00 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:30.679 15:47:00 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:28:30.679 15:47:00 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:28:30.679 15:47:00 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:30.679 15:47:00 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:30.679 15:47:00 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:28:30.679 15:47:00 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:30.679 15:47:00 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:28:30.679 15:47:00 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:30.679 15:47:00 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:30.679 15:47:00 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:30.679 15:47:00 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.679 15:47:00 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.679 15:47:00 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.679 15:47:00 -- paths/export.sh@5 -- # export PATH 00:28:30.679 15:47:00 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:30.679 15:47:00 -- nvmf/common.sh@47 -- # : 0 00:28:30.679 15:47:00 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:30.679 15:47:00 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:30.679 15:47:00 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:30.679 15:47:00 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:30.679 15:47:00 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:30.679 15:47:00 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:30.679 15:47:00 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:30.679 15:47:00 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:30.679 15:47:00 -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:28:30.679 15:47:00 -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:28:30.679 15:47:00 -- host/digest.sh@16 -- # runtime=2 00:28:30.679 15:47:00 -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:28:30.679 15:47:00 -- host/digest.sh@138 -- # nvmftestinit 00:28:30.679 15:47:00 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:28:30.679 15:47:00 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:30.679 15:47:00 -- nvmf/common.sh@437 -- # prepare_net_devs 00:28:30.679 15:47:00 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:28:30.679 15:47:00 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:28:30.679 15:47:00 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:30.679 15:47:00 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:30.679 15:47:00 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.679 15:47:00 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:28:30.679 15:47:00 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:28:30.679 15:47:00 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:28:30.679 15:47:00 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:28:30.679 15:47:00 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:28:30.679 15:47:00 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:28:30.679 15:47:00 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:30.679 15:47:00 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:30.679 15:47:00 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:28:30.679 15:47:00 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:28:30.680 15:47:00 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 
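
Note: the NVMF_INITIATOR_IP and NVMF_FIRST_TARGET_IP values defined here are exactly what the get_main_ns_ip calls scattered through the auth trace resolve between. A minimal sketch of that helper, inferred from the repeated nvmf/common.sh@717-731 trace lines; the variable that expands to "tcp" is assumed here to be the harness transport setting ($TEST_TRANSPORT), and the failure handling is an assumption:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          [rdma]=NVMF_FIRST_TARGET_IP   # 10.0.0.2, lives inside the nvmf_tgt_ns_spdk namespace
          [tcp]=NVMF_INITIATOR_IP       # 10.0.0.1, the host side of the veth pair
      )
      [[ -n $TEST_TRANSPORT && -n ${ip_candidates[$TEST_TRANSPORT]} ]] || return 1  # assumed variable name
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -n ${!ip} ]] || return 1       # indirect expansion: variable name -> value
      echo "${!ip}"
  }

With a TCP transport and NET_TYPE=virt this always prints 10.0.0.1, which is why every bdev_nvme_attach_controller in the auth trace targets -a 10.0.0.1.
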
00:28:30.680 15:47:00 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:28:30.680 15:47:00 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:28:30.680 15:47:00 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:30.680 15:47:00 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:28:30.680 15:47:00 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:28:30.680 15:47:00 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:28:30.680 15:47:00 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:28:30.680 15:47:00 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:28:30.680 15:47:00 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:28:30.680 Cannot find device "nvmf_tgt_br" 00:28:30.680 15:47:00 -- nvmf/common.sh@155 -- # true 00:28:30.680 15:47:00 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:28:30.680 Cannot find device "nvmf_tgt_br2" 00:28:30.680 15:47:00 -- nvmf/common.sh@156 -- # true 00:28:30.680 15:47:00 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:28:30.680 15:47:00 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:28:30.680 Cannot find device "nvmf_tgt_br" 00:28:30.680 15:47:00 -- nvmf/common.sh@158 -- # true 00:28:30.680 15:47:00 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:28:30.680 Cannot find device "nvmf_tgt_br2" 00:28:30.680 15:47:00 -- nvmf/common.sh@159 -- # true 00:28:30.680 15:47:00 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:28:30.680 15:47:00 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:28:30.680 15:47:00 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:28:30.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.680 15:47:00 -- nvmf/common.sh@162 -- # true 00:28:30.680 15:47:00 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:28:30.680 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:28:30.680 15:47:00 -- nvmf/common.sh@163 -- # true 00:28:30.680 15:47:00 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:28:30.680 15:47:00 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:28:30.680 15:47:00 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:28:30.680 15:47:00 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:28:30.680 15:47:00 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:28:30.680 15:47:00 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:28:30.680 15:47:00 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:28:30.680 15:47:00 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:28:30.938 15:47:00 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:28:30.938 15:47:00 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:28:30.938 15:47:00 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:28:30.938 15:47:00 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:28:30.938 15:47:01 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:28:30.938 15:47:01 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:28:30.938 15:47:01 -- nvmf/common.sh@188 -- # ip netns 
exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:28:30.938 15:47:01 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:28:30.938 15:47:01 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:28:30.938 15:47:01 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:28:30.938 15:47:01 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:28:30.938 15:47:01 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:28:30.938 15:47:01 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:28:30.938 15:47:01 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:28:30.938 15:47:01 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:28:30.938 15:47:01 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:28:30.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:30.938 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.065 ms 00:28:30.938 00:28:30.938 --- 10.0.0.2 ping statistics --- 00:28:30.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.938 rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms 00:28:30.938 15:47:01 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:28:30.938 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:28:30.938 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.068 ms 00:28:30.938 00:28:30.938 --- 10.0.0.3 ping statistics --- 00:28:30.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.938 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:28:30.938 15:47:01 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:28:30.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:30.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.036 ms 00:28:30.938 00:28:30.938 --- 10.0.0.1 ping statistics --- 00:28:30.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:30.938 rtt min/avg/max/mdev = 0.036/0.036/0.036/0.000 ms 00:28:30.938 15:47:01 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:30.938 15:47:01 -- nvmf/common.sh@422 -- # return 0 00:28:30.938 15:47:01 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:28:30.938 15:47:01 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:30.938 15:47:01 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:28:30.938 15:47:01 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:28:30.938 15:47:01 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:30.938 15:47:01 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:28:30.938 15:47:01 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:28:30.938 15:47:01 -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:28:30.938 15:47:01 -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:28:30.938 15:47:01 -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:28:30.938 15:47:01 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:30.938 15:47:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:30.938 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:28:30.938 ************************************ 00:28:30.938 START TEST nvmf_digest_clean 00:28:30.938 ************************************ 00:28:30.938 15:47:01 -- common/autotest_common.sh@1111 -- # run_digest 00:28:30.938 15:47:01 -- host/digest.sh@120 -- # local dsa_initiator 00:28:30.938 15:47:01 -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:28:30.938 15:47:01 -- host/digest.sh@121 -- # dsa_initiator=false 
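(For reference, the virtual network that nvmf_veth_init builds in the trace above can be condensed into the sketch below. This is not part of the test output; it simply restates the ip/iptables commands already traced, with the teardown of leftover devices omitted, so the addressing used by the rest of the log is easy to see: 10.0.0.1 on the initiator side, 10.0.0.2 and 10.0.0.3 inside the target namespace, all joined by the nvmf_br bridge.)

# Sketch of the topology set up by nvmf_veth_init (reconstructed from the commands traced above).
ip netns add nvmf_tgt_ns_spdk
ip link add nvmf_init_if type veth peer name nvmf_init_br      # initiator side
ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br       # target side, first interface
ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2      # target side, second interface
ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
ip addr add 10.0.0.1/24 dev nvmf_init_if                       # initiator address
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
ip link set nvmf_init_if up
ip link set nvmf_init_br up
ip link set nvmf_tgt_br up
ip link set nvmf_tgt_br2 up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
ip netns exec nvmf_tgt_ns_spdk ip link set lo up
ip link add nvmf_br type bridge                                # bridge joining both sides
ip link set nvmf_br up
ip link set nvmf_init_br master nvmf_br
ip link set nvmf_tgt_br  master nvmf_br
ip link set nvmf_tgt_br2 master nvmf_br
iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP traffic
iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT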
00:28:30.938 15:47:01 -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:28:30.938 15:47:01 -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:28:30.938 15:47:01 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:30.938 15:47:01 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:30.938 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:28:30.938 15:47:01 -- nvmf/common.sh@470 -- # nvmfpid=85345 00:28:30.938 15:47:01 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:30.938 15:47:01 -- nvmf/common.sh@471 -- # waitforlisten 85345 00:28:30.938 15:47:01 -- common/autotest_common.sh@817 -- # '[' -z 85345 ']' 00:28:30.938 15:47:01 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:30.938 15:47:01 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:30.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:30.938 15:47:01 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:30.938 15:47:01 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:30.938 15:47:01 -- common/autotest_common.sh@10 -- # set +x 00:28:31.196 [2024-04-26 15:47:01.278153] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:28:31.196 [2024-04-26 15:47:01.278234] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.196 [2024-04-26 15:47:01.414906] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:31.454 [2024-04-26 15:47:01.550096] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.454 [2024-04-26 15:47:01.550165] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.454 [2024-04-26 15:47:01.550180] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.454 [2024-04-26 15:47:01.550190] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:31.454 [2024-04-26 15:47:01.550199] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:31.454 [2024-04-26 15:47:01.550244] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:32.020 15:47:02 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:32.020 15:47:02 -- common/autotest_common.sh@850 -- # return 0 00:28:32.020 15:47:02 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:32.020 15:47:02 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:32.020 15:47:02 -- common/autotest_common.sh@10 -- # set +x 00:28:32.278 15:47:02 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:32.278 15:47:02 -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:28:32.278 15:47:02 -- host/digest.sh@126 -- # common_target_config 00:28:32.278 15:47:02 -- host/digest.sh@43 -- # rpc_cmd 00:28:32.278 15:47:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:32.279 15:47:02 -- common/autotest_common.sh@10 -- # set +x 00:28:32.279 null0 00:28:32.279 [2024-04-26 15:47:02.462419] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:32.279 [2024-04-26 15:47:02.486502] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.279 15:47:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:32.279 15:47:02 -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:28:32.279 15:47:02 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:32.279 15:47:02 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:32.279 15:47:02 -- host/digest.sh@80 -- # rw=randread 00:28:32.279 15:47:02 -- host/digest.sh@80 -- # bs=4096 00:28:32.279 15:47:02 -- host/digest.sh@80 -- # qd=128 00:28:32.279 15:47:02 -- host/digest.sh@80 -- # scan_dsa=false 00:28:32.279 15:47:02 -- host/digest.sh@83 -- # bperfpid=85405 00:28:32.279 15:47:02 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:32.279 15:47:02 -- host/digest.sh@84 -- # waitforlisten 85405 /var/tmp/bperf.sock 00:28:32.279 15:47:02 -- common/autotest_common.sh@817 -- # '[' -z 85405 ']' 00:28:32.279 15:47:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:32.279 15:47:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:32.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:32.279 15:47:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:32.279 15:47:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:32.279 15:47:02 -- common/autotest_common.sh@10 -- # set +x 00:28:32.279 [2024-04-26 15:47:02.540796] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:28:32.279 [2024-04-26 15:47:02.540934] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85405 ] 00:28:32.537 [2024-04-26 15:47:02.678239] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:32.537 [2024-04-26 15:47:02.801419] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.470 15:47:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:33.470 15:47:03 -- common/autotest_common.sh@850 -- # return 0 00:28:33.470 15:47:03 -- host/digest.sh@86 -- # false 00:28:33.470 15:47:03 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:33.470 15:47:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:33.728 15:47:03 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:33.728 15:47:03 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:34.292 nvme0n1 00:28:34.292 15:47:04 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:34.292 15:47:04 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:34.292 Running I/O for 2 seconds... 00:28:36.214 00:28:36.214 Latency(us) 00:28:36.214 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.214 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:36.214 nvme0n1 : 2.00 18793.88 73.41 0.00 0.00 6802.77 2919.33 11200.70 00:28:36.214 =================================================================================================================== 00:28:36.214 Total : 18793.88 73.41 0.00 0.00 6802.77 2919.33 11200.70 00:28:36.214 0 00:28:36.214 15:47:06 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:36.214 15:47:06 -- host/digest.sh@93 -- # get_accel_stats 00:28:36.214 15:47:06 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:36.214 15:47:06 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:36.214 15:47:06 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:36.214 | select(.opcode=="crc32c") 00:28:36.214 | "\(.module_name) \(.executed)"' 00:28:36.473 15:47:06 -- host/digest.sh@94 -- # false 00:28:36.473 15:47:06 -- host/digest.sh@94 -- # exp_module=software 00:28:36.473 15:47:06 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:36.473 15:47:06 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:36.473 15:47:06 -- host/digest.sh@98 -- # killprocess 85405 00:28:36.473 15:47:06 -- common/autotest_common.sh@936 -- # '[' -z 85405 ']' 00:28:36.473 15:47:06 -- common/autotest_common.sh@940 -- # kill -0 85405 00:28:36.473 15:47:06 -- common/autotest_common.sh@941 -- # uname 00:28:36.473 15:47:06 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:36.473 15:47:06 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85405 00:28:36.732 15:47:06 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:36.732 15:47:06 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:36.732 killing process with pid 85405 00:28:36.732 
15:47:06 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85405' 00:28:36.732 Received shutdown signal, test time was about 2.000000 seconds 00:28:36.732 00:28:36.732 Latency(us) 00:28:36.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:36.732 =================================================================================================================== 00:28:36.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:36.732 15:47:06 -- common/autotest_common.sh@955 -- # kill 85405 00:28:36.732 15:47:06 -- common/autotest_common.sh@960 -- # wait 85405 00:28:36.732 15:47:07 -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:28:36.732 15:47:07 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:36.732 15:47:07 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:36.732 15:47:07 -- host/digest.sh@80 -- # rw=randread 00:28:36.732 15:47:07 -- host/digest.sh@80 -- # bs=131072 00:28:36.732 15:47:07 -- host/digest.sh@80 -- # qd=16 00:28:36.732 15:47:07 -- host/digest.sh@80 -- # scan_dsa=false 00:28:36.732 15:47:07 -- host/digest.sh@83 -- # bperfpid=85491 00:28:36.732 15:47:07 -- host/digest.sh@84 -- # waitforlisten 85491 /var/tmp/bperf.sock 00:28:36.732 15:47:07 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:36.732 15:47:07 -- common/autotest_common.sh@817 -- # '[' -z 85491 ']' 00:28:36.732 15:47:07 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:36.732 15:47:07 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:36.732 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:36.732 15:47:07 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:36.991 15:47:07 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:36.991 15:47:07 -- common/autotest_common.sh@10 -- # set +x 00:28:36.991 [2024-04-26 15:47:07.080669] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:28:36.991 [2024-04-26 15:47:07.080781] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85491 ] 00:28:36.991 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:36.991 Zero copy mechanism will not be used. 
00:28:36.991 [2024-04-26 15:47:07.219490] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:37.249 [2024-04-26 15:47:07.339928] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:37.817 15:47:08 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:37.817 15:47:08 -- common/autotest_common.sh@850 -- # return 0 00:28:37.817 15:47:08 -- host/digest.sh@86 -- # false 00:28:37.817 15:47:08 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:37.817 15:47:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:38.382 15:47:08 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.382 15:47:08 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:38.640 nvme0n1 00:28:38.640 15:47:08 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:38.640 15:47:08 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:38.897 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:38.897 Zero copy mechanism will not be used. 00:28:38.897 Running I/O for 2 seconds... 00:28:40.796 00:28:40.796 Latency(us) 00:28:40.796 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:40.796 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:28:40.796 nvme0n1 : 2.00 7523.79 940.47 0.00 0.00 2123.11 651.64 5808.87 00:28:40.796 =================================================================================================================== 00:28:40.796 Total : 7523.79 940.47 0.00 0.00 2123.11 651.64 5808.87 00:28:40.796 0 00:28:40.796 15:47:10 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:40.796 15:47:11 -- host/digest.sh@93 -- # get_accel_stats 00:28:40.796 15:47:11 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:40.796 15:47:11 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:40.796 | select(.opcode=="crc32c") 00:28:40.796 | "\(.module_name) \(.executed)"' 00:28:40.797 15:47:11 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:41.054 15:47:11 -- host/digest.sh@94 -- # false 00:28:41.054 15:47:11 -- host/digest.sh@94 -- # exp_module=software 00:28:41.054 15:47:11 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:41.054 15:47:11 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:41.054 15:47:11 -- host/digest.sh@98 -- # killprocess 85491 00:28:41.054 15:47:11 -- common/autotest_common.sh@936 -- # '[' -z 85491 ']' 00:28:41.054 15:47:11 -- common/autotest_common.sh@940 -- # kill -0 85491 00:28:41.054 15:47:11 -- common/autotest_common.sh@941 -- # uname 00:28:41.054 15:47:11 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:41.054 15:47:11 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85491 00:28:41.054 15:47:11 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:41.054 15:47:11 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:41.054 15:47:11 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85491' 00:28:41.054 killing process with pid 85491 00:28:41.054 15:47:11 -- common/autotest_common.sh@955 -- # kill 85491 
00:28:41.054 Received shutdown signal, test time was about 2.000000 seconds 00:28:41.054 00:28:41.054 Latency(us) 00:28:41.054 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:41.054 =================================================================================================================== 00:28:41.054 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:41.054 15:47:11 -- common/autotest_common.sh@960 -- # wait 85491 00:28:41.620 15:47:11 -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:28:41.620 15:47:11 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:41.620 15:47:11 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:41.620 15:47:11 -- host/digest.sh@80 -- # rw=randwrite 00:28:41.620 15:47:11 -- host/digest.sh@80 -- # bs=4096 00:28:41.620 15:47:11 -- host/digest.sh@80 -- # qd=128 00:28:41.620 15:47:11 -- host/digest.sh@80 -- # scan_dsa=false 00:28:41.620 15:47:11 -- host/digest.sh@83 -- # bperfpid=85589 00:28:41.620 15:47:11 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:28:41.620 15:47:11 -- host/digest.sh@84 -- # waitforlisten 85589 /var/tmp/bperf.sock 00:28:41.620 15:47:11 -- common/autotest_common.sh@817 -- # '[' -z 85589 ']' 00:28:41.620 15:47:11 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:41.620 15:47:11 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:41.620 15:47:11 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:41.620 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:41.620 15:47:11 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:41.620 15:47:11 -- common/autotest_common.sh@10 -- # set +x 00:28:41.620 [2024-04-26 15:47:11.750116] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:28:41.620 [2024-04-26 15:47:11.750236] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85589 ] 00:28:41.620 [2024-04-26 15:47:11.888771] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:41.879 [2024-04-26 15:47:12.041403] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:42.810 15:47:12 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:42.810 15:47:12 -- common/autotest_common.sh@850 -- # return 0 00:28:42.810 15:47:12 -- host/digest.sh@86 -- # false 00:28:42.810 15:47:12 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:42.811 15:47:12 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:43.088 15:47:13 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.088 15:47:13 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:43.365 nvme0n1 00:28:43.365 15:47:13 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:43.366 15:47:13 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:43.366 Running I/O for 2 seconds... 00:28:45.265 00:28:45.265 Latency(us) 00:28:45.265 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.265 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:28:45.265 nvme0n1 : 2.01 22026.94 86.04 0.00 0.00 5805.06 2353.34 15490.33 00:28:45.265 =================================================================================================================== 00:28:45.265 Total : 22026.94 86.04 0.00 0.00 5805.06 2353.34 15490.33 00:28:45.265 0 00:28:45.523 15:47:15 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:45.523 15:47:15 -- host/digest.sh@93 -- # get_accel_stats 00:28:45.523 15:47:15 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:45.523 15:47:15 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:45.523 | select(.opcode=="crc32c") 00:28:45.523 | "\(.module_name) \(.executed)"' 00:28:45.523 15:47:15 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:45.781 15:47:15 -- host/digest.sh@94 -- # false 00:28:45.781 15:47:15 -- host/digest.sh@94 -- # exp_module=software 00:28:45.781 15:47:15 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:45.781 15:47:15 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:45.781 15:47:15 -- host/digest.sh@98 -- # killprocess 85589 00:28:45.781 15:47:15 -- common/autotest_common.sh@936 -- # '[' -z 85589 ']' 00:28:45.781 15:47:15 -- common/autotest_common.sh@940 -- # kill -0 85589 00:28:45.781 15:47:15 -- common/autotest_common.sh@941 -- # uname 00:28:45.781 15:47:15 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:45.781 15:47:15 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85589 00:28:45.781 15:47:15 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:45.782 killing process with pid 85589 00:28:45.782 15:47:15 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:45.782 
15:47:15 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85589' 00:28:45.782 15:47:15 -- common/autotest_common.sh@955 -- # kill 85589 00:28:45.782 Received shutdown signal, test time was about 2.000000 seconds 00:28:45.782 00:28:45.782 Latency(us) 00:28:45.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:45.782 =================================================================================================================== 00:28:45.782 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:45.782 15:47:15 -- common/autotest_common.sh@960 -- # wait 85589 00:28:46.040 15:47:16 -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:28:46.040 15:47:16 -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:28:46.040 15:47:16 -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:28:46.040 15:47:16 -- host/digest.sh@80 -- # rw=randwrite 00:28:46.040 15:47:16 -- host/digest.sh@80 -- # bs=131072 00:28:46.040 15:47:16 -- host/digest.sh@80 -- # qd=16 00:28:46.040 15:47:16 -- host/digest.sh@80 -- # scan_dsa=false 00:28:46.040 15:47:16 -- host/digest.sh@83 -- # bperfpid=85680 00:28:46.040 15:47:16 -- host/digest.sh@84 -- # waitforlisten 85680 /var/tmp/bperf.sock 00:28:46.040 15:47:16 -- host/digest.sh@82 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:28:46.040 15:47:16 -- common/autotest_common.sh@817 -- # '[' -z 85680 ']' 00:28:46.040 15:47:16 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:46.040 15:47:16 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:46.040 15:47:16 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:46.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:46.040 15:47:16 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:46.041 15:47:16 -- common/autotest_common.sh@10 -- # set +x 00:28:46.041 [2024-04-26 15:47:16.233606] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:28:46.041 [2024-04-26 15:47:16.233766] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85680 ] 00:28:46.041 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:46.041 Zero copy mechanism will not be used. 
00:28:46.299 [2024-04-26 15:47:16.383414] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:46.299 [2024-04-26 15:47:16.501754] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:47.234 15:47:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:47.234 15:47:17 -- common/autotest_common.sh@850 -- # return 0 00:28:47.234 15:47:17 -- host/digest.sh@86 -- # false 00:28:47.234 15:47:17 -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:28:47.234 15:47:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:28:47.493 15:47:17 -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.493 15:47:17 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:47.750 nvme0n1 00:28:48.008 15:47:18 -- host/digest.sh@92 -- # bperf_py perform_tests 00:28:48.008 15:47:18 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:48.008 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:48.008 Zero copy mechanism will not be used. 00:28:48.008 Running I/O for 2 seconds... 00:28:49.909 00:28:49.909 Latency(us) 00:28:49.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:49.909 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:28:49.909 nvme0n1 : 2.00 6482.95 810.37 0.00 0.00 2462.36 1899.05 9651.67 00:28:49.909 =================================================================================================================== 00:28:49.909 Total : 6482.95 810.37 0.00 0.00 2462.36 1899.05 9651.67 00:28:49.909 0 00:28:49.909 15:47:20 -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:28:49.909 15:47:20 -- host/digest.sh@93 -- # get_accel_stats 00:28:49.909 15:47:20 -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:28:49.909 15:47:20 -- host/digest.sh@37 -- # jq -rc '.operations[] 00:28:49.909 | select(.opcode=="crc32c") 00:28:49.909 | "\(.module_name) \(.executed)"' 00:28:49.909 15:47:20 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:28:50.167 15:47:20 -- host/digest.sh@94 -- # false 00:28:50.167 15:47:20 -- host/digest.sh@94 -- # exp_module=software 00:28:50.167 15:47:20 -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:28:50.167 15:47:20 -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:28:50.167 15:47:20 -- host/digest.sh@98 -- # killprocess 85680 00:28:50.167 15:47:20 -- common/autotest_common.sh@936 -- # '[' -z 85680 ']' 00:28:50.167 15:47:20 -- common/autotest_common.sh@940 -- # kill -0 85680 00:28:50.167 15:47:20 -- common/autotest_common.sh@941 -- # uname 00:28:50.167 15:47:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:50.167 15:47:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85680 00:28:50.491 15:47:20 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:50.491 15:47:20 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:50.491 killing process with pid 85680 00:28:50.491 15:47:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85680' 00:28:50.491 Received shutdown signal, test time was about 2.000000 seconds 
00:28:50.491 00:28:50.491 Latency(us) 00:28:50.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:50.491 =================================================================================================================== 00:28:50.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:50.491 15:47:20 -- common/autotest_common.sh@955 -- # kill 85680 00:28:50.491 15:47:20 -- common/autotest_common.sh@960 -- # wait 85680 00:28:50.491 15:47:20 -- host/digest.sh@132 -- # killprocess 85345 00:28:50.491 15:47:20 -- common/autotest_common.sh@936 -- # '[' -z 85345 ']' 00:28:50.491 15:47:20 -- common/autotest_common.sh@940 -- # kill -0 85345 00:28:50.491 15:47:20 -- common/autotest_common.sh@941 -- # uname 00:28:50.491 15:47:20 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:50.491 15:47:20 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85345 00:28:50.491 15:47:20 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:28:50.491 15:47:20 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:28:50.491 killing process with pid 85345 00:28:50.491 15:47:20 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85345' 00:28:50.491 15:47:20 -- common/autotest_common.sh@955 -- # kill 85345 00:28:50.491 15:47:20 -- common/autotest_common.sh@960 -- # wait 85345 00:28:51.058 00:28:51.058 real 0m19.870s 00:28:51.058 user 0m38.289s 00:28:51.058 sys 0m4.784s 00:28:51.058 15:47:21 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:28:51.058 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:51.058 ************************************ 00:28:51.058 END TEST nvmf_digest_clean 00:28:51.058 ************************************ 00:28:51.058 15:47:21 -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:28:51.058 15:47:21 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:28:51.058 15:47:21 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:28:51.058 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:51.058 ************************************ 00:28:51.058 START TEST nvmf_digest_error 00:28:51.058 ************************************ 00:28:51.058 15:47:21 -- common/autotest_common.sh@1111 -- # run_digest_error 00:28:51.058 15:47:21 -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:28:51.058 15:47:21 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:28:51.058 15:47:21 -- common/autotest_common.sh@710 -- # xtrace_disable 00:28:51.058 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:51.058 15:47:21 -- nvmf/common.sh@470 -- # nvmfpid=85803 00:28:51.058 15:47:21 -- nvmf/common.sh@471 -- # waitforlisten 85803 00:28:51.058 15:47:21 -- common/autotest_common.sh@817 -- # '[' -z 85803 ']' 00:28:51.058 15:47:21 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:28:51.058 15:47:21 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:51.058 15:47:21 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:51.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:51.058 15:47:21 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:51.058 15:47:21 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:51.058 15:47:21 -- common/autotest_common.sh@10 -- # set +x 00:28:51.058 [2024-04-26 15:47:21.270378] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:28:51.058 [2024-04-26 15:47:21.270506] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:51.316 [2024-04-26 15:47:21.408920] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.316 [2024-04-26 15:47:21.562732] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:51.316 [2024-04-26 15:47:21.562823] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:51.316 [2024-04-26 15:47:21.562837] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:51.316 [2024-04-26 15:47:21.562847] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:51.316 [2024-04-26 15:47:21.562855] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:51.316 [2024-04-26 15:47:21.562903] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:28:52.250 15:47:22 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:52.250 15:47:22 -- common/autotest_common.sh@850 -- # return 0 00:28:52.250 15:47:22 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:28:52.250 15:47:22 -- common/autotest_common.sh@716 -- # xtrace_disable 00:28:52.250 15:47:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.250 15:47:22 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:52.250 15:47:22 -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:28:52.250 15:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.250 15:47:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.250 [2024-04-26 15:47:22.271556] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:28:52.250 15:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.250 15:47:22 -- host/digest.sh@105 -- # common_target_config 00:28:52.250 15:47:22 -- host/digest.sh@43 -- # rpc_cmd 00:28:52.251 15:47:22 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:52.251 15:47:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.251 null0 00:28:52.251 [2024-04-26 15:47:22.418752] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:52.251 [2024-04-26 15:47:22.443005] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:52.251 15:47:22 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:52.251 15:47:22 -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:28:52.251 15:47:22 -- host/digest.sh@54 -- # local rw bs qd 00:28:52.251 15:47:22 -- host/digest.sh@56 -- # rw=randread 00:28:52.251 15:47:22 -- host/digest.sh@56 -- # bs=4096 00:28:52.251 15:47:22 -- host/digest.sh@56 -- # qd=128 00:28:52.251 15:47:22 -- host/digest.sh@58 -- # bperfpid=85847 00:28:52.251 15:47:22 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:28:52.251 15:47:22 -- host/digest.sh@60 -- # waitforlisten 
85847 /var/tmp/bperf.sock 00:28:52.251 15:47:22 -- common/autotest_common.sh@817 -- # '[' -z 85847 ']' 00:28:52.251 15:47:22 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:52.251 15:47:22 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:52.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:52.251 15:47:22 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:52.251 15:47:22 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:52.251 15:47:22 -- common/autotest_common.sh@10 -- # set +x 00:28:52.251 [2024-04-26 15:47:22.516891] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:28:52.251 [2024-04-26 15:47:22.517031] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85847 ] 00:28:52.509 [2024-04-26 15:47:22.661206] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:52.509 [2024-04-26 15:47:22.789034] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:53.443 15:47:23 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:53.443 15:47:23 -- common/autotest_common.sh@850 -- # return 0 00:28:53.443 15:47:23 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:53.443 15:47:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:53.443 15:47:23 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:53.700 15:47:23 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.700 15:47:23 -- common/autotest_common.sh@10 -- # set +x 00:28:53.700 15:47:23 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.700 15:47:23 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.700 15:47:23 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:53.957 nvme0n1 00:28:53.957 15:47:24 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:28:53.957 15:47:24 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:53.957 15:47:24 -- common/autotest_common.sh@10 -- # set +x 00:28:53.957 15:47:24 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:53.957 15:47:24 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:53.957 15:47:24 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:53.957 Running I/O for 2 seconds... 
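(The flood of "data digest error on tqpair" records that follows is expected. In the setup traced above, the target's crc32c opcode is routed through the accel "error" module, the bdevperf initiator attaches the controller with --ddgst so every TCP data PDU carries a data digest, and 256 "corrupt" crc32c results are injected before I/O starts; the initiator then detects the bad digests and the affected READs complete with transient transport errors and are retried, since the bdev retry count is -1. The condensed sketch below restates only the RPCs already visible in this trace, with the socket paths and NQN taken from this run; it is not the test script itself.)

# Condensed sketch of the digest-error-injection sequence traced above.
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
BPERF_PY=/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py

# On the nvmf target (default /var/tmp/spdk.sock): route crc32c through the error module.
$RPC accel_assign_opc -o crc32c -m error

# On the bdevperf initiator (started with --wait-for-rpc on /var/tmp/bperf.sock):
# retry indefinitely on errors, finish framework init, attach with data digest enabled.
$RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
$RPC -s /var/tmp/bperf.sock framework_start_init
$RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Back on the target: corrupt the next 256 crc32c results, then drive the workload.
$RPC accel_error_inject_error -o crc32c -t corrupt -i 256
$BPERF_PY -s /var/tmp/bperf.sock perform_tests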
00:28:53.957 [2024-04-26 15:47:24.228279] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:53.957 [2024-04-26 15:47:24.228358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.957 [2024-04-26 15:47:24.228375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:53.957 [2024-04-26 15:47:24.239527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:53.957 [2024-04-26 15:47:24.239568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:8079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:53.957 [2024-04-26 15:47:24.239582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.253691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.253746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:39 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.253761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.266694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.266745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20554 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.266760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.280789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.280845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.280859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.295511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.295562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:8406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.295577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.307863] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.307920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.307934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.320545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.320595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:19268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.320609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.334282] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.334337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:2332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.334352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.346589] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.346641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.346656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.359150] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.359196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14606 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.359210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.371193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.371239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15994 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.371254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.385547] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.385604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:9322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.385619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.396560] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.396604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:2464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.396617] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.408504] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.408552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:19732 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.408567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.424481] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.424547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.424562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.435644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.435695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.435710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.450880] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.450942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:18770 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.450957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.462492] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.462547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:22992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.462561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.478193] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.478260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:174 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.478276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.491159] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.491214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14569 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 
15:47:24.491229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.215 [2024-04-26 15:47:24.504598] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.215 [2024-04-26 15:47:24.504657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.215 [2024-04-26 15:47:24.504672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.515780] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.515826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.515841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.530033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.530087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12138 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.530110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.544685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.544738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:3474 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.544754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.558297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.558341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25070 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.558355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.570680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.570723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:4490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.570737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.584767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.584813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:10723 len:1 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.584828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.598877] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.598934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:6688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.598950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.611793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.611841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.611855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.625614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.625671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:17166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.625686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.640358] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.640410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:24367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.640424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.653429] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.653471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:10054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.653485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.665729] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.665780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:15104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.665795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.680567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.680616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:57 nsid:1 lba:21080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.680630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.692013] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.692055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21526 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.692069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.705741] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.705786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:3305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.705799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.719035] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.719076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:8794 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.719090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.732243] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.732299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.732313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.745804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.745861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2319 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.745875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.474 [2024-04-26 15:47:24.758026] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.474 [2024-04-26 15:47:24.758072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:21530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.474 [2024-04-26 15:47:24.758086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.770283] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.770326] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.770341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.784788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.784837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:23893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.784851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.797996] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.798046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7234 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.798060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.811744] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.811793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:7456 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.811808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.825338] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.825386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:3197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.825401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.838350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.838392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.838406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.852490] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.852537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:19334 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.852551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.867293] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.867350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23915 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.867365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.880366] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.880412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:21110 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.880426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.892431] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.892474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:25245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.892489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.905441] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.905487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:12899 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.905502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.919189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.919248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23970 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.919262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.931774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.931832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:16461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.931847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.943748] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.943797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:18297 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.943812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.957757] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.957802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:18448 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.957816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.971523] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.971568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2966 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.971582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.985821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.985866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:3666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.985880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:24.997774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:24.997816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:1358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:24.997829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.733 [2024-04-26 15:47:25.011854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.733 [2024-04-26 15:47:25.011896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.733 [2024-04-26 15:47:25.011910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.026038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.026088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:5194 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.026103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.039090] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.039150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:13701 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.039166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:54.992 [2024-04-26 15:47:25.050889] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.050939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:23544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.050953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.064189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.064237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:13098 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.064252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.078210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.078271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.078286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.092821] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.092912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:4501 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.092938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.109128] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.109197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:25475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.109214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.120611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.120663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:25461 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.120685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.134382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.134440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.134456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.148438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.148490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.148505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.161977] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.162024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:22057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.162038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.176180] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.176228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.176242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.992 [2024-04-26 15:47:25.187260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.992 [2024-04-26 15:47:25.187309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:22244 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.992 [2024-04-26 15:47:25.187323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.202630] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.202687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:3193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.202702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.217083] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.217131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11862 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.217158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.229095] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.229161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:8541 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.229178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.242309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.242357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.242372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.255961] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.256025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:3613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.256048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.267752] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.267810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:6311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.267825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:54.993 [2024-04-26 15:47:25.281594] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:54.993 [2024-04-26 15:47:25.281655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:54.993 [2024-04-26 15:47:25.281670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.296311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.296389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:7104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.251 [2024-04-26 15:47:25.296404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.308837] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.308903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.251 [2024-04-26 15:47:25.308918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.322315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.322377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:19941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:55.251 [2024-04-26 15:47:25.322392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.334194] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.334252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.251 [2024-04-26 15:47:25.334267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.346578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.346628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14533 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.251 [2024-04-26 15:47:25.346655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.361002] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.361060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:9205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.251 [2024-04-26 15:47:25.361075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.251 [2024-04-26 15:47:25.374674] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.251 [2024-04-26 15:47:25.374734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:5391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.251 [2024-04-26 15:47:25.374749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.389328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.389399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.389414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.402866] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.402922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10468 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.402937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.414296] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.414345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 
lba:14617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.414359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.429407] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.429455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.429470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.442089] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.442130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:18394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.442157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.455273] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.455316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.455330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.469203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.469255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:16423 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.469270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.481982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.482047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.482069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.493673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.493723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:16343 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.493737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.507595] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.507642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.507657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.521168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.521214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.521230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.252 [2024-04-26 15:47:25.533715] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.252 [2024-04-26 15:47:25.533767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19112 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.252 [2024-04-26 15:47:25.533781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.510 [2024-04-26 15:47:25.547178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.510 [2024-04-26 15:47:25.547229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:5478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.510 [2024-04-26 15:47:25.547244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.510 [2024-04-26 15:47:25.561569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.510 [2024-04-26 15:47:25.561616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.510 [2024-04-26 15:47:25.561637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.510 [2024-04-26 15:47:25.571557] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.510 [2024-04-26 15:47:25.571598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6507 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.510 [2024-04-26 15:47:25.571611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.510 [2024-04-26 15:47:25.586563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.510 [2024-04-26 15:47:25.586616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.510 [2024-04-26 15:47:25.586630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.510 [2024-04-26 15:47:25.601410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 
00:28:55.510 [2024-04-26 15:47:25.601457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:5120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.601472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.614896] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.614940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:3192 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.614955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.626819] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.626860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:18962 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.626874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.641558] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.641599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:16147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.641614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.655342] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.655383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:3276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.655397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.667778] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.667822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:17292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.667836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.678648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.678694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:10624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.678708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.693606] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.693667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:6872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.693681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.705408] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.705449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3232 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.705463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.719793] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.719838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:25247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.719852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.733165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.733209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:7078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.733223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.747260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.747302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:10921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.747316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.760723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.760769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.760783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.773905] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.773955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4436 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.773971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:28:55.511 [2024-04-26 15:47:25.786175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.786223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22818 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.786237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.511 [2024-04-26 15:47:25.802860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.511 [2024-04-26 15:47:25.802934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.511 [2024-04-26 15:47:25.802956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.817844] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.817901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:7988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.817922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.832538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.832595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:6270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.832616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.846495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.846554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.846571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.860104] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.860165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:13447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.860182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.875028] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.875082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.875098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.888107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.888166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.888181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.900111] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.900165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:13323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.900180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.913174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.913218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:21019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.913233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.927262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.927310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:17463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.927325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.940509] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.940552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:16755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.940567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.954256] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.954298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:13351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.954313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.770 [2024-04-26 15:47:25.967542] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.770 [2024-04-26 15:47:25.967588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:24265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.770 [2024-04-26 15:47:25.967603] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:25.981160] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:25.981210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:19690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 [2024-04-26 15:47:25.981225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:25.992610] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:25.992650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 [2024-04-26 15:47:25.992664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:26.006841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:26.006910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:3320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 [2024-04-26 15:47:26.006925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:26.018617] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:26.018678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24522 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 [2024-04-26 15:47:26.018692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:26.032262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:26.032315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 [2024-04-26 15:47:26.032329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:26.044893] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:26.044937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 [2024-04-26 15:47:26.044952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:55.771 [2024-04-26 15:47:26.060297] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:55.771 [2024-04-26 15:47:26.060351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:55.771 
[2024-04-26 15:47:26.060366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.029 [2024-04-26 15:47:26.074015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.029 [2024-04-26 15:47:26.074059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:16010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.074073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.085287] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.085331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.085345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.099125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.099193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:22339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.099208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.114093] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.114161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:1446 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.114178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.127499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.127550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:9893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.127565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.141599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.141678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.141693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.155588] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.155653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:8457 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.155668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.167446] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.167496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13746 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.167519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.181719] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.181782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:21479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.181797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.195322] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.195373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:11764 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.195388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 [2024-04-26 15:47:26.208775] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x21d4680) 00:28:56.030 [2024-04-26 15:47:26.208822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.030 [2024-04-26 15:47:26.208835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:56.030 00:28:56.030 Latency(us) 00:28:56.030 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.030 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:28:56.030 nvme0n1 : 2.01 19029.21 74.33 0.00 0.00 6717.75 3664.06 18588.39 00:28:56.030 =================================================================================================================== 00:28:56.030 Total : 19029.21 74.33 0.00 0.00 6717.75 3664.06 18588.39 00:28:56.030 0 00:28:56.030 15:47:26 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:28:56.030 15:47:26 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:28:56.030 | .driver_specific 00:28:56.030 | .nvme_error 00:28:56.030 | .status_code 00:28:56.030 | .command_transient_transport_error' 00:28:56.030 15:47:26 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:28:56.030 15:47:26 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:28:56.289 15:47:26 -- host/digest.sh@71 -- # (( 149 > 0 )) 00:28:56.289 15:47:26 -- host/digest.sh@73 -- # killprocess 85847 00:28:56.289 15:47:26 -- common/autotest_common.sh@936 -- # '[' -z 85847 ']' 00:28:56.289 15:47:26 -- common/autotest_common.sh@940 
-- # kill -0 85847 00:28:56.289 15:47:26 -- common/autotest_common.sh@941 -- # uname 00:28:56.289 15:47:26 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:28:56.289 15:47:26 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85847 00:28:56.289 15:47:26 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:28:56.289 15:47:26 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:28:56.289 killing process with pid 85847 00:28:56.289 15:47:26 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85847' 00:28:56.289 15:47:26 -- common/autotest_common.sh@955 -- # kill 85847 00:28:56.289 Received shutdown signal, test time was about 2.000000 seconds 00:28:56.289 00:28:56.289 Latency(us) 00:28:56.289 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.289 =================================================================================================================== 00:28:56.289 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:56.289 15:47:26 -- common/autotest_common.sh@960 -- # wait 85847 00:28:56.547 15:47:26 -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:28:56.547 15:47:26 -- host/digest.sh@54 -- # local rw bs qd 00:28:56.547 15:47:26 -- host/digest.sh@56 -- # rw=randread 00:28:56.547 15:47:26 -- host/digest.sh@56 -- # bs=131072 00:28:56.547 15:47:26 -- host/digest.sh@56 -- # qd=16 00:28:56.547 15:47:26 -- host/digest.sh@58 -- # bperfpid=85937 00:28:56.547 15:47:26 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:28:56.547 15:47:26 -- host/digest.sh@60 -- # waitforlisten 85937 /var/tmp/bperf.sock 00:28:56.547 15:47:26 -- common/autotest_common.sh@817 -- # '[' -z 85937 ']' 00:28:56.547 15:47:26 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:28:56.547 15:47:26 -- common/autotest_common.sh@822 -- # local max_retries=100 00:28:56.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:28:56.547 15:47:26 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:28:56.547 15:47:26 -- common/autotest_common.sh@826 -- # xtrace_disable 00:28:56.547 15:47:26 -- common/autotest_common.sh@10 -- # set +x 00:28:56.806 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:56.806 Zero copy mechanism will not be used. 00:28:56.806 [2024-04-26 15:47:26.860524] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
00:28:56.806 [2024-04-26 15:47:26.860617] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85937 ] 00:28:56.806 [2024-04-26 15:47:26.995327] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.064 [2024-04-26 15:47:27.115287] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:28:57.630 15:47:27 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:28:57.630 15:47:27 -- common/autotest_common.sh@850 -- # return 0 00:28:57.630 15:47:27 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.630 15:47:27 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:28:57.888 15:47:28 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:28:57.888 15:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:57.888 15:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:57.888 15:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:57.888 15:47:28 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:57.888 15:47:28 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:28:58.455 nvme0n1 00:28:58.455 15:47:28 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:28:58.455 15:47:28 -- common/autotest_common.sh@549 -- # xtrace_disable 00:28:58.455 15:47:28 -- common/autotest_common.sh@10 -- # set +x 00:28:58.455 15:47:28 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:28:58.455 15:47:28 -- host/digest.sh@69 -- # bperf_py perform_tests 00:28:58.455 15:47:28 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:28:58.455 I/O size of 131072 is greater than zero copy threshold (65536). 00:28:58.455 Zero copy mechanism will not be used. 00:28:58.455 Running I/O for 2 seconds... 
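The trace above is the setup for the data-digest negative test: host/digest.sh starts a fresh bdevperf on /var/tmp/bperf.sock, disables NVMe retries and enables per-status error counters, attaches the controller with --ddgst so TCP data digests are verified, and injects crc32c corruption so the READs below complete with transient transport errors. A minimal sketch of that flow, using only the RPCs visible in this log; the socket paths, target address, NQN and bdev name are copied from the trace, and the target-side RPC socket for the injection step is an assumption (the script drives it through its rpc_cmd helper):

    # bdevperf-side RPCs (socket taken from the trace)
    BPERF_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
    # target-side RPCs; default socket assumed, the log only shows the rpc_cmd wrapper
    TGT_RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py"
    # count every NVMe error status and never retry, so the counters reflect each failed READ
    $BPERF_RPC bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
    # attach with data digest enabled; a corrupted crc32c then fails the digest check on received data
    $BPERF_RPC bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # corrupt 32 crc32c operations, as in the trace, so the host reports data digest errors
    $TGT_RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    # run the workload configured on the bdevperf command line (-w randread -o 131072 -q 16 -t 2)
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
    # read back the transient transport error counter the same way get_transient_errcount does
    $BPERF_RPC bdev_get_iostat -b nvme0n1 | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'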
00:28:58.455 [2024-04-26 15:47:28.608172] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.608237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.608254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.612876] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.612922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.612937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.617468] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.617530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.617544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.621036] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.621081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.621096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.625427] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.625469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.625483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.629879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.629922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.629936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.634040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.634084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.634098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.637548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.637589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.637602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.642333] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.642374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.642388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.647315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.647357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.647371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.650483] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.650522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.650536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.654475] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.654517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.654531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.659689] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.659732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.659746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.664272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.664312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.664325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.667673] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.667714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.667727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.672237] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.672279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.672292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.676488] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.676528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.455 [2024-04-26 15:47:28.676541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.455 [2024-04-26 15:47:28.680251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.455 [2024-04-26 15:47:28.680291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.680304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.684248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.684297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.684311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.687500] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.687541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.687555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.691858] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.691900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 
[2024-04-26 15:47:28.691913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.695249] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.695288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.695301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.700232] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.700273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.700286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.704730] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.704773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.704786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.708102] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.708151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.708166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.712548] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.712590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.712604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.716563] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.716605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.716618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.720316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.720367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12256 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.720381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.724699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.724741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.724755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.728597] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.728638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.728651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.732397] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.732438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.732452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.736599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.736639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.736652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.740425] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.740466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.740480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.456 [2024-04-26 15:47:28.743600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.456 [2024-04-26 15:47:28.743638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.456 [2024-04-26 15:47:28.743652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.715 [2024-04-26 15:47:28.747782] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.747824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.747838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.752554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.752595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.752622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.756169] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.756209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.756223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.760072] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.760113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.760127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.764525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.764568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.764581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.769222] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.769266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.769281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.773179] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.773221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.773234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.776849] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.776890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.776904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.781434] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.781477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.781491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.785484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.785527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.785540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.789318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.789360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.789373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.793303] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.793343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.793357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.798042] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.798084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.798098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.801054] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.801093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.801106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.804928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 
00:28:58.716 [2024-04-26 15:47:28.804968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.804982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.809860] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.809903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.809917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.812908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.812948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.812962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.817125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.817177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.817192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.821025] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.821068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.821082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.824292] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.824331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.824355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.828960] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.829006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.829020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.833567] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.833614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.833628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.837624] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.837670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.837685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.841699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.841745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.841759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.845372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.716 [2024-04-26 15:47:28.845414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.716 [2024-04-26 15:47:28.845434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.716 [2024-04-26 15:47:28.850051] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.850096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.850110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.854762] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.854806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.854820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.858147] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.858186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.858200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.862883] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.862925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.862939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.867566] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.867607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.867622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.871820] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.871864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.871878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.874691] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.874738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.874752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.879230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.879287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.879301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.883716] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.883759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.883773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.888215] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.888256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.888270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 
00:28:58.717 [2024-04-26 15:47:28.891581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.891627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.891652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.895711] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.895756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.895771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.900713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.900756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.900769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.905231] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.905272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.905286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.908184] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.908221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.908234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.912472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.912515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.912529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.916228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.916269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.916282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.919599] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.919640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.919653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.923834] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.923876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.923889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.928311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.928362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.928376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.931803] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.931843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.931857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.935920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.935960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.935974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.940556] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.940598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.940611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.943936] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.943979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.943992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.948361] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.948401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.948415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.952812] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.952851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.952864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.717 [2024-04-26 15:47:28.956626] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.717 [2024-04-26 15:47:28.956665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.717 [2024-04-26 15:47:28.956679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.960625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.960664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.960682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.964872] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.964913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.964927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.968295] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.968343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.968357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.972379] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.972420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.972434] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.975971] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.976006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.976020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.980491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.980530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.980544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.984846] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.984890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.984904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.988669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.988711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.988724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.993210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.993275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.993292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:28.997511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:28.997560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:28.997575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:29.002079] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:29.002121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 
[2024-04-26 15:47:29.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.718 [2024-04-26 15:47:29.006845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.718 [2024-04-26 15:47:29.006889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.718 [2024-04-26 15:47:29.006903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.010228] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.010268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.010281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.014791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.014834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.014847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.018388] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.018428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.018442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.022766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.022807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.022821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.028206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.028255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.028269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.031214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.031251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21504 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.031265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.035746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.035788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.035802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.040206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.040258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.040272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.043658] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.043698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.043711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.047125] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.047181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.047195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.051354] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.051398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.051411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.054845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.054885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.054898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.059311] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.059353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:6 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.059367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.063291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.063342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.063355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.067294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.067335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.067349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.072004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.072044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.072058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.075447] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.075488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.075501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.079433] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.978 [2024-04-26 15:47:29.079475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.978 [2024-04-26 15:47:29.079488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.978 [2024-04-26 15:47:29.083255] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.083297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.083311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.087472] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.087514] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.087528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.091602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.091648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.091662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.095791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.095830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.095844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.100163] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.100204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.100218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.104200] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.104241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.104254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.108965] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.109006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.109019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.111742] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.111780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.111793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.116703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 
[2024-04-26 15:47:29.116748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.116762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.120953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.120991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.121004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.125791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.125830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.125844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.129401] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.129441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.129454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.133226] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.133276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.133289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.137930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.137969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.137991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.141109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.141161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.141175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.145520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.145559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.145573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.150272] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.150311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.150324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.153919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.153957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.153970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.157265] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.157307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.157320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.161393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.161431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.161444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.165611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.165651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.165664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.170288] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.170330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.170343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.979 [2024-04-26 15:47:29.172894] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.979 [2024-04-26 15:47:29.172932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.979 [2024-04-26 15:47:29.172946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.177774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.177813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.177827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.180932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.180976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.180990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.185636] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.185676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.185690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.190115] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.190169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.190183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.193442] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.193482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.193497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.198404] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.198444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.198458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:58.980 [2024-04-26 15:47:29.202906] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.202947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.202961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.207723] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.207763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.207777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.210527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.210565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.210578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.214790] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.214830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.214843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.219382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.219421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.219436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.224615] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.224657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.224671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.228499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.228538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.228552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.231644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.231685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.231698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.236410] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.236451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.236465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.241082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.241124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.241150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.244498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.244538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.244551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.248148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.248193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.248207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.253116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.253167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.253181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.258207] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.258247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.258260] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.261680] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.261717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.261731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:58.980 [2024-04-26 15:47:29.266220] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:58.980 [2024-04-26 15:47:29.266260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:58.980 [2024-04-26 15:47:29.266274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.271330] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.271372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.271386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.274309] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.274348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.274361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.279238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.279278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.279292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.282264] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.282314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.282328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.286642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.286683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.286696] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.290731] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.290770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.290783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.295406] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.295446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.295459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.298440] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.298478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.298492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.302995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.303036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.303049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.240 [2024-04-26 15:47:29.307784] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.240 [2024-04-26 15:47:29.307824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.240 [2024-04-26 15:47:29.307837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.311813] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.311853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.311867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.315692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.315732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:59.241 [2024-04-26 15:47:29.315745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.319348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.319392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.319406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.323545] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.323586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.323601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.327236] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.327275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.327289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.331098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.331150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.331164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.335758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.335799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.335812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.340913] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.340954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.340968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.345908] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.345949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16128 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.345962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.348607] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.348643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.348655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.353485] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.353525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.353538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.357733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.357773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.357787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.361189] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.361227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.361240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.365704] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.365743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.365757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.370788] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.370827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.370840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.375082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.375122] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.375148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.378208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.378247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.378260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.383130] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.383180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.383194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.387600] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.387640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.387663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.391829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.391868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.391881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.395161] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.241 [2024-04-26 15:47:29.395198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.241 [2024-04-26 15:47:29.395211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.241 [2024-04-26 15:47:29.399982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.400022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.400036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.405038] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.405078] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.405092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.410027] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.410073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.410087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.412791] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.412829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.412843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.417946] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.417988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.418001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.423327] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.423366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.423380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.426845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.426883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.426896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.431493] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.431537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.431564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.436109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 
00:28:59.242 [2024-04-26 15:47:29.436163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.436177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.439246] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.439284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.439298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.443495] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.443535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.443548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.448892] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.448932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.448946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.453445] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.453485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.453500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.456552] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.456594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.456608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.461098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.461155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.461170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.465697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.465737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.465751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.470149] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.470187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.470200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.475395] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.475435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.475448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.478938] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.478977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.478991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.482677] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.482715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.482729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.488045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.488094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.488107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.491213] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.491249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.242 [2024-04-26 15:47:29.491263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.242 [2024-04-26 15:47:29.495465] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.242 [2024-04-26 15:47:29.495504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.495517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.500314] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.500372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.500386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.504644] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.504683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.504696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.507718] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.507757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.507771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.512699] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.512743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.512757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.517056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.517095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.517108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.520502] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.520540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.520554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:59.243 [2024-04-26 15:47:29.524854] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.524893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.524907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.243 [2024-04-26 15:47:29.529648] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.243 [2024-04-26 15:47:29.529687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.243 [2024-04-26 15:47:29.529700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.534313] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.534354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.534368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.536982] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.537018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.537032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.541746] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.541788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.541801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.546432] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.546472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.546486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.549920] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.549960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.549973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.553930] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.553972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.553986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.558533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.558582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.558596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.563004] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.563045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.563058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.566305] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.566344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.566357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.570572] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.570611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.570624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.575325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.575364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.575378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.579707] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.579746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.579760] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.583238] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.583278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.583291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.587614] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.587657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.587671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.591048] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.591089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.591103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.595348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.595388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.595401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.599685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.599725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.599739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.603444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.603483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.603497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.607170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.607208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.607222] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.611520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.611561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.611574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.615641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.615681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.615695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.619995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.620042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.620055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.624122] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.624174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.624188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.628369] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.628409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.628423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.632178] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.632216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.632229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.636082] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.636121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.502 [2024-04-26 15:47:29.636150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.640511] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.640551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.502 [2024-04-26 15:47:29.640564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.502 [2024-04-26 15:47:29.644646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.502 [2024-04-26 15:47:29.644685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.644699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.648318] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.648368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.648382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.652611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.652651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.652664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.656611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.656662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.656675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.660815] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.660856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.660870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.664165] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.664199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25376 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.664212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.668040] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.668089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.668102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.672259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.672301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.672314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.675605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.675644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.675657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.680315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.680364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.680384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.685276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.685317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.685330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.689420] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.689459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.689473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.692652] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.692701] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.692714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.697602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.697655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.697669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.702487] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.702528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.702542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.706853] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.706898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.706912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.710479] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.710525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.710539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.714934] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.714981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.714995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.720045] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.720088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.720102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.723496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.723535] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.723549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.727554] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.727606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.727620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.732081] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.732121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.732148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.736088] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.736128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.736157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.740234] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.740273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.740286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.744671] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.744724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.744741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.748528] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.748568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.748582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.753461] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 
00:28:59.503 [2024-04-26 15:47:29.753503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.503 [2024-04-26 15:47:29.753517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.503 [2024-04-26 15:47:29.757055] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.503 [2024-04-26 15:47:29.757090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.757104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.761020] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.761060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.761073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.766473] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.766513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.766526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.769964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.770002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.770016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.774436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.774476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.774490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.778829] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.778869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.778882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.782250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.782289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.782304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.786661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.786702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.786716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.504 [2024-04-26 15:47:29.790692] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.504 [2024-04-26 15:47:29.790732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.504 [2024-04-26 15:47:29.790746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.795071] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.795113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.795128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.799505] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.799545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.799559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.803208] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.803249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.803262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.807570] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.807610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.807623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.812785] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.812828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.812842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.817503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.817543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.817557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.821024] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.821064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.821078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.825527] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.825568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.825582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.830210] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.830249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.830262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.834627] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.834669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.834684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.837857] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.837897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.837910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 
dnr:0 00:28:59.763 [2024-04-26 15:47:29.842260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.842301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.842315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.847316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.847363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.847377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.850529] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.763 [2024-04-26 15:47:29.850568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.763 [2024-04-26 15:47:29.850581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.763 [2024-04-26 15:47:29.854325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.854365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.854379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.858419] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.858461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.858474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.863496] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.863541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.863555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.866646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.866687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.866700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.871174] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.871215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.871229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.875561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.875601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.875615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.880069] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.880110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.880123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.885168] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.885206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.885220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.889113] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.889165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.889179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.893325] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.893369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.893382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.897108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.897162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.897176] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.901733] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.901773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.901786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.905385] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.905426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.905439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.909372] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.909415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.909428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.914098] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.914151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.914167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.918687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.918726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.918740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.922879] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.922918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.922932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.925758] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.925795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:28:59.764 [2024-04-26 15:47:29.925808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.930393] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.930437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.930451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.934109] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.934164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.934178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.937799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.937838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.937852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.941950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.941990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.942004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.945623] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.945662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.945675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.949759] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.949798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.949811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.954499] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.954538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.954551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.957621] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.957661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.957675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.962890] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.764 [2024-04-26 15:47:29.962934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.764 [2024-04-26 15:47:29.962948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.764 [2024-04-26 15:47:29.967424] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.967465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.967479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.970251] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.970300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.970316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.975701] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.975743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.975757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.980734] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.980775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.980789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.984394] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.984445] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.984458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.988491] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.988531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.988545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.992995] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.993035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.993049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:29.997670] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:29.997715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:29.997729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.002382] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.002426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.002439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.005935] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.005976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.005989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.010907] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.010952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.010965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.015294] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 
15:47:30.015334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.015348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.018602] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.018656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.018670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.022830] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.022873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.022887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.026476] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.026527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.026542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.031033] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.031081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.031095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.036576] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.036627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.036642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.040259] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.040298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.040312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.043828] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.043871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.043885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.048561] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.048610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.048625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:28:59.765 [2024-04-26 15:47:30.052538] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:28:59.765 [2024-04-26 15:47:30.052588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:59.765 [2024-04-26 15:47:30.052604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.056262] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.056306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.056320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.061108] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.061167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.061181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.065766] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.065819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.065834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.069789] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.069834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.069848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.073956] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.074005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.074018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.077683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.077726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.077740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.081642] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.081685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.081699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.085276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.085317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.085331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.089964] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.090007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.090021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.093694] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.093735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.093749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.097649] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.097700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.097715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 
00:29:00.073 [2024-04-26 15:47:30.102214] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.102255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.102269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.106347] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.106390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.106404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.110953] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.110998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.073 [2024-04-26 15:47:30.111012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.073 [2024-04-26 15:47:30.114804] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.073 [2024-04-26 15:47:30.114847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.114861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.118291] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.118331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.118344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.122774] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.122817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.122830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.126805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.126847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.126861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.130116] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.130165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.130179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.134970] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.135043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.135056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.138693] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.138734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.138748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.143248] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.143291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.143305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.148103] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.148173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.148188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.152463] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.152504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.152518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.155276] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.155314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.155337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.160148] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.160192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.160206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.164685] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.164729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.164743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.168206] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.168246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.168259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.172438] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.172480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.172493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.177698] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.177742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.177756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.182928] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.182979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.182994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.187683] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.187730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.187745] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.190350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.190389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.190403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.195348] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.195398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.195412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.199203] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.199250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.199270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.203537] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.203586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.203600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.207672] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.207722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.207741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.211533] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.211581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.211594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.215886] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.215930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:00.074 [2024-04-26 15:47:30.215944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.220581] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.220624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.220637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.074 [2024-04-26 15:47:30.223777] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.074 [2024-04-26 15:47:30.223816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.074 [2024-04-26 15:47:30.223829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.228230] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.228272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.228285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.232471] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.232512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.232526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.236321] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.236377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.236392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.240805] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.240846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.240868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.244667] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.244710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1792 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.244725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.249015] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.249057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.249072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.252933] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.252976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.252990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.256170] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.256211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.256224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.260799] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.260849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.260864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.265052] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.265096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.265109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.269080] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.269124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.269155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.272669] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.272710] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.272723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.276713] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.276755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.276768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.280797] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.280837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.280851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.284949] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.285006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.285033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.288444] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.288504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.288531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.293209] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.293268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.293300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.297503] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.297576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.297599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.301873] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.301930] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.301952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.305450] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.305502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.305524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.309990] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.310044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.310068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.314436] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.314487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.314510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.318520] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.318572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.318596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.322324] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.322374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.322398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.326944] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.075 [2024-04-26 15:47:30.326995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.327019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.330239] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 
00:29:00.075 [2024-04-26 15:47:30.330291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.075 [2024-04-26 15:47:30.330321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.075 [2024-04-26 15:47:30.334351] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.334400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.334424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.076 [2024-04-26 15:47:30.339583] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.339634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.339657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.076 [2024-04-26 15:47:30.343947] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.343997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.344021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.076 [2024-04-26 15:47:30.347334] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.347384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.347408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.076 [2024-04-26 15:47:30.352525] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.352575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.352598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.076 [2024-04-26 15:47:30.357666] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.357717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.357740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.076 [2024-04-26 15:47:30.361175] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.076 [2024-04-26 15:47:30.361223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.076 [2024-04-26 15:47:30.361246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.335 [2024-04-26 15:47:30.365240] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.335 [2024-04-26 15:47:30.365289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.335 [2024-04-26 15:47:30.365311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.335 [2024-04-26 15:47:30.369919] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.335 [2024-04-26 15:47:30.369977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.335 [2024-04-26 15:47:30.370000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.335 [2024-04-26 15:47:30.373459] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.335 [2024-04-26 15:47:30.373508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.335 [2024-04-26 15:47:30.373531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.335 [2024-04-26 15:47:30.377709] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.335 [2024-04-26 15:47:30.377759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.335 [2024-04-26 15:47:30.377782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.335 [2024-04-26 15:47:30.382697] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.335 [2024-04-26 15:47:30.382745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.335 [2024-04-26 15:47:30.382770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.335 [2024-04-26 15:47:30.386316] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.335 [2024-04-26 15:47:30.386364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.386386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.390350] 
nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.390399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.390422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.394962] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.395011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.395036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.398997] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.399047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.399069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.403056] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.403107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.403130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.406850] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.406900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.406923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.411353] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.411402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.411425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.414764] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.414812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.414835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 
dnr:0 00:29:00.336 [2024-04-26 15:47:30.419435] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.419484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.419507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.424185] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.424234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.424257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.429105] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.429167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.429191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.432460] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.432509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.432531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.436682] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.436732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.436755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.440534] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.440583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.440606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.444845] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.444896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.444919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:3 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.449260] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.449309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.449333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.453250] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.453298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.453321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.457611] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.457662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.457684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.462096] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.462162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.462186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.465030] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.465078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.465101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.469988] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.470037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.470060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.474625] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.474674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.474698] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.477703] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.477752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.477774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.482315] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.482365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.482388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.486661] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.486711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.486733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.490767] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.490816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.490838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.494747] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.336 [2024-04-26 15:47:30.494795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.336 [2024-04-26 15:47:30.494817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.336 [2024-04-26 15:47:30.499281] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.499329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.499352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.502950] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.502999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.503023] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.506687] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.506736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.506758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.510328] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.510376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.510407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.514484] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.514533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.514555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.518569] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.518618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.518641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.522578] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.522628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.522651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.526646] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.526697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.526720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.531107] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.531183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:00.337 [2024-04-26 15:47:30.531206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.534841] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.534894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.534915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.539183] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.539231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.539254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.543224] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.543271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.543294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.547605] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.547653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.547678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.550792] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.550855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.550877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.554954] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.555003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.555026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.559875] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.559934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8992 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.559958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.564152] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.564203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.564226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.566999] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.567053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.567076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.571932] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.571983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.572006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.576498] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.576548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.576570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.580120] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.580181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.580203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.583770] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.583819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.583841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.588641] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.588690] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.588712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.592391] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.592447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.592469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:00.337 [2024-04-26 15:47:30.596350] nvme_tcp.c:1447:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1292540) 00:29:00.337 [2024-04-26 15:47:30.596398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:00.337 [2024-04-26 15:47:30.596420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:00.337 00:29:00.337 Latency(us) 00:29:00.337 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.337 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:00.337 nvme0n1 : 2.00 7421.93 927.74 0.00 0.00 2151.58 629.29 10426.18 00:29:00.337 =================================================================================================================== 00:29:00.337 Total : 7421.93 927.74 0.00 0.00 2151.58 629.29 10426.18 00:29:00.337 0 00:29:00.337 15:47:30 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:00.337 15:47:30 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:00.338 15:47:30 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:00.338 15:47:30 -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:00.338 | .driver_specific 00:29:00.338 | .nvme_error 00:29:00.338 | .status_code 00:29:00.338 | .command_transient_transport_error' 00:29:00.904 15:47:30 -- host/digest.sh@71 -- # (( 479 > 0 )) 00:29:00.904 15:47:30 -- host/digest.sh@73 -- # killprocess 85937 00:29:00.904 15:47:30 -- common/autotest_common.sh@936 -- # '[' -z 85937 ']' 00:29:00.904 15:47:30 -- common/autotest_common.sh@940 -- # kill -0 85937 00:29:00.904 15:47:30 -- common/autotest_common.sh@941 -- # uname 00:29:00.904 15:47:30 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:00.904 15:47:30 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85937 00:29:00.904 15:47:30 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:00.904 15:47:30 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:00.904 killing process with pid 85937 00:29:00.904 15:47:30 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85937' 00:29:00.904 15:47:30 -- common/autotest_common.sh@955 -- # kill 85937 00:29:00.904 Received shutdown signal, test time was about 2.000000 seconds 00:29:00.904 00:29:00.904 Latency(us) 00:29:00.904 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:00.904 =================================================================================================================== 00:29:00.904 
Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:00.904 15:47:30 -- common/autotest_common.sh@960 -- # wait 85937 00:29:00.904 15:47:31 -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:00.904 15:47:31 -- host/digest.sh@54 -- # local rw bs qd 00:29:00.904 15:47:31 -- host/digest.sh@56 -- # rw=randwrite 00:29:00.904 15:47:31 -- host/digest.sh@56 -- # bs=4096 00:29:00.904 15:47:31 -- host/digest.sh@56 -- # qd=128 00:29:00.904 15:47:31 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:29:00.904 15:47:31 -- host/digest.sh@58 -- # bperfpid=86033 00:29:00.904 15:47:31 -- host/digest.sh@60 -- # waitforlisten 86033 /var/tmp/bperf.sock 00:29:00.904 15:47:31 -- common/autotest_common.sh@817 -- # '[' -z 86033 ']' 00:29:00.904 15:47:31 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:00.904 15:47:31 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:00.904 15:47:31 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:00.904 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:00.904 15:47:31 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:00.904 15:47:31 -- common/autotest_common.sh@10 -- # set +x 00:29:01.162 [2024-04-26 15:47:31.233358] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:29:01.162 [2024-04-26 15:47:31.233481] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86033 ] 00:29:01.162 [2024-04-26 15:47:31.368259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.420 [2024-04-26 15:47:31.487694] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:01.986 15:47:32 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:01.986 15:47:32 -- common/autotest_common.sh@850 -- # return 0 00:29:01.986 15:47:32 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:01.986 15:47:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:02.243 15:47:32 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:02.243 15:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.243 15:47:32 -- common/autotest_common.sh@10 -- # set +x 00:29:02.243 15:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.243 15:47:32 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.243 15:47:32 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:02.809 nvme0n1 00:29:02.809 15:47:32 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:02.809 15:47:32 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:02.809 15:47:32 -- common/autotest_common.sh@10 -- # set +x 00:29:02.809 15:47:32 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:02.809 15:47:32 -- host/digest.sh@69 -- # 
bperf_py perform_tests 00:29:02.809 15:47:32 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:02.809 Running I/O for 2 seconds... 00:29:02.809 [2024-04-26 15:47:33.015522] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f6458 00:29:02.809 [2024-04-26 15:47:33.016611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.016716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.028051] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5ec8 00:29:02.809 [2024-04-26 15:47:33.029272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:12999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.029315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.040834] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e7818 00:29:02.809 [2024-04-26 15:47:33.042499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.042551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.048960] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f4298 00:29:02.809 [2024-04-26 15:47:33.049732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:17566 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.049786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.060705] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ddc00 00:29:02.809 [2024-04-26 15:47:33.061439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:10854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.061476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.074686] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f57b0 00:29:02.809 [2024-04-26 15:47:33.076024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.076075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.083774] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ef6a8 00:29:02.809 [2024-04-26 15:47:33.084511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 
lba:7147 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.084549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:02.809 [2024-04-26 15:47:33.097984] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e9e10 00:29:02.809 [2024-04-26 15:47:33.099410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:02.809 [2024-04-26 15:47:33.099444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.110028] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc998 00:29:03.068 [2024-04-26 15:47:33.111415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16938 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.111455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.120940] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f9b30 00:29:03.068 [2024-04-26 15:47:33.122499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:16577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.122536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.132482] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fd640 00:29:03.068 [2024-04-26 15:47:33.133677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.133712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.146815] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0bc0 00:29:03.068 [2024-04-26 15:47:33.148653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:17608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.148704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.155356] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e3498 00:29:03.068 [2024-04-26 15:47:33.156345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.156384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.169977] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f5378 00:29:03.068 [2024-04-26 15:47:33.171623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:22 nsid:1 lba:8245 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.171675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.180869] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e27f0 00:29:03.068 [2024-04-26 15:47:33.182675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:2340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.182719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.192677] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5ec8 00:29:03.068 [2024-04-26 15:47:33.193981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.194057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.203547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ebb98 00:29:03.068 [2024-04-26 15:47:33.204400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:14432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.204497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.215415] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190efae0 00:29:03.068 [2024-04-26 15:47:33.216531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:18889 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.216621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.226123] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0bc0 00:29:03.068 [2024-04-26 15:47:33.227115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:14203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.227256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.238803] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e7818 00:29:03.068 [2024-04-26 15:47:33.240578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:572 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.240651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.250458] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e1f80 00:29:03.068 [2024-04-26 15:47:33.251766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.251837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.261991] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc998 00:29:03.068 [2024-04-26 15:47:33.263303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:12809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.263383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.274925] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eb328 00:29:03.068 [2024-04-26 15:47:33.276811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.276913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.283731] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f8e88 00:29:03.068 [2024-04-26 15:47:33.284670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:22067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.284730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.295494] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e0a68 00:29:03.068 [2024-04-26 15:47:33.296415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19804 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.296492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.309523] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e3d08 00:29:03.068 [2024-04-26 15:47:33.311008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:3693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.311089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.320417] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f4f40 00:29:03.068 [2024-04-26 15:47:33.321451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:8362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.321531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.333051] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ddc00 00:29:03.068 [2024-04-26 
15:47:33.334091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:25413 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.334219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.343230] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fcdd0 00:29:03.068 [2024-04-26 15:47:33.343983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:9726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.344082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:29:03.068 [2024-04-26 15:47:33.355321] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ebfd0 00:29:03.068 [2024-04-26 15:47:33.356393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:23389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.068 [2024-04-26 15:47:33.356542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:03.326 [2024-04-26 15:47:33.370603] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f81e0 00:29:03.326 [2024-04-26 15:47:33.372499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:23358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.372573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.379268] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ef270 00:29:03.327 [2024-04-26 15:47:33.380241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.380319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.393513] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fb8b8 00:29:03.327 [2024-04-26 15:47:33.394889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:1100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.394968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.404628] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6b70 00:29:03.327 [2024-04-26 15:47:33.406644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.406740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.417208] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e9168 
00:29:03.327 [2024-04-26 15:47:33.418326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.418422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.432808] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fb480 00:29:03.327 [2024-04-26 15:47:33.434551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.434645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.441650] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fb480 00:29:03.327 [2024-04-26 15:47:33.442518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.442575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.453122] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ed4e8 00:29:03.327 [2024-04-26 15:47:33.453970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.454062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.463914] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f46d0 00:29:03.327 [2024-04-26 15:47:33.464680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7671 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.464799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.478466] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f96f8 00:29:03.327 [2024-04-26 15:47:33.479901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:24603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.479995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.489680] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190dfdc0 00:29:03.327 [2024-04-26 15:47:33.491439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.491520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.501241] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with 
pdu=0x2000190ea680 00:29:03.327 [2024-04-26 15:47:33.502245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20401 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.502315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.514418] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ec408 00:29:03.327 [2024-04-26 15:47:33.516027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:21112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.516100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.522571] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e4de8 00:29:03.327 [2024-04-26 15:47:33.523333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.523377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.537305] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e12d8 00:29:03.327 [2024-04-26 15:47:33.538695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:10855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.538759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.546496] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e0a68 00:29:03.327 [2024-04-26 15:47:33.547209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.547317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.560800] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc998 00:29:03.327 [2024-04-26 15:47:33.561968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.562056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.572153] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ea248 00:29:03.327 [2024-04-26 15:47:33.573145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:1943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.573209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.585257] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bb00) with pdu=0x2000190ea680 00:29:03.327 [2024-04-26 15:47:33.587027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19382 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.587096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.594365] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f9b30 00:29:03.327 [2024-04-26 15:47:33.595533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:14443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.595567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.606246] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fe2e8 00:29:03.327 [2024-04-26 15:47:33.607600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:19949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.607633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:29:03.327 [2024-04-26 15:47:33.616478] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190de038 00:29:03.327 [2024-04-26 15:47:33.617030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:7129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.327 [2024-04-26 15:47:33.617061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.627318] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f57b0 00:29:03.587 [2024-04-26 15:47:33.628024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:6918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.628058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.640262] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e84c0 00:29:03.587 [2024-04-26 15:47:33.641134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:6613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.641171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.653966] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e27f0 00:29:03.587 [2024-04-26 15:47:33.655427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:6621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.655460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.666490] tcp.c:2053:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ed920 00:29:03.587 [2024-04-26 15:47:33.668411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:20690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.668446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.674507] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6738 00:29:03.587 [2024-04-26 15:47:33.675329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.675366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.686443] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e01f8 00:29:03.587 [2024-04-26 15:47:33.687589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:24815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.687665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.699045] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6b70 00:29:03.587 [2024-04-26 15:47:33.700437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.700471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.712116] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eee38 00:29:03.587 [2024-04-26 15:47:33.714057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:10801 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.714086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.720635] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e84c0 00:29:03.587 [2024-04-26 15:47:33.721683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.721728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.734997] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e12d8 00:29:03.587 [2024-04-26 15:47:33.736635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:7588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.736698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.745039] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6b70 00:29:03.587 [2024-04-26 15:47:33.745886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23266 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.587 [2024-04-26 15:47:33.745917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:03.587 [2024-04-26 15:47:33.755679] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e7c50 00:29:03.587 [2024-04-26 15:47:33.757067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.757099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.767095] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5a90 00:29:03.588 [2024-04-26 15:47:33.767972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.768015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.777988] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190edd58 00:29:03.588 [2024-04-26 15:47:33.778864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.778896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.791687] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5658 00:29:03.588 [2024-04-26 15:47:33.793152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:22930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.793187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.801793] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e8d30 00:29:03.588 [2024-04-26 15:47:33.802467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:23704 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.802502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.813399] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ec408 00:29:03.588 [2024-04-26 15:47:33.814392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:2937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.814426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 
15:47:33.825899] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e38d0 00:29:03.588 [2024-04-26 15:47:33.827375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:15918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.827414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.835993] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc998 00:29:03.588 [2024-04-26 15:47:33.836690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.836758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.847483] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ef6a8 00:29:03.588 [2024-04-26 15:47:33.848477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:8885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.848549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.858734] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0ff8 00:29:03.588 [2024-04-26 15:47:33.859474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:13629 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.859530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:03.588 [2024-04-26 15:47:33.869979] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e23b8 00:29:03.588 [2024-04-26 15:47:33.870946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.588 [2024-04-26 15:47:33.870991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.880389] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6300 00:29:03.847 [2024-04-26 15:47:33.881202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.881248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.893419] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f1868 00:29:03.847 [2024-04-26 15:47:33.895213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.895290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007f p:0 m:0 dnr:0 
00:29:03.847 [2024-04-26 15:47:33.905027] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e9168 00:29:03.847 [2024-04-26 15:47:33.906299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.906365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.917539] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eaef0 00:29:03.847 [2024-04-26 15:47:33.918965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.919023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.927983] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0ff8 00:29:03.847 [2024-04-26 15:47:33.929289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:22315 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.929332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.939845] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eea00 00:29:03.847 [2024-04-26 15:47:33.940898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:13302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.940944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.951200] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ebb98 00:29:03.847 [2024-04-26 15:47:33.952458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.847 [2024-04-26 15:47:33.952508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:03.847 [2024-04-26 15:47:33.962447] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eb760 00:29:03.848 [2024-04-26 15:47:33.963844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:33.963888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:33.974189] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e0630 00:29:03.848 [2024-04-26 15:47:33.975216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:33.975255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:33.984432] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190dece0 00:29:03.848 [2024-04-26 15:47:33.985589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:12750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:33.985626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:33.996115] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ea248 00:29:03.848 [2024-04-26 15:47:33.996903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:1279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:33.996946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.006430] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fa7d8 00:29:03.848 [2024-04-26 15:47:34.007273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:3241 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.007313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.019082] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fac10 00:29:03.848 [2024-04-26 15:47:34.019872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.019908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.032293] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0350 00:29:03.848 [2024-04-26 15:47:34.033887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.033919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.044356] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5a90 00:29:03.848 [2024-04-26 15:47:34.046105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:8158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.046146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.056324] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e88f8 00:29:03.848 [2024-04-26 15:47:34.058228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:17625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.058289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:24 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.064709] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f3e60 00:29:03.848 [2024-04-26 15:47:34.065696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.065727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.076888] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f4298 00:29:03.848 [2024-04-26 15:47:34.078034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:25138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.078064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.088724] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f6890 00:29:03.848 [2024-04-26 15:47:34.089654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.089711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.100066] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e38d0 00:29:03.848 [2024-04-26 15:47:34.100741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.100789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.112274] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e23b8 00:29:03.848 [2024-04-26 15:47:34.113462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:5189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.113501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.124976] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5a90 00:29:03.848 [2024-04-26 15:47:34.125937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4370 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.125977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:29:03.848 [2024-04-26 15:47:34.136488] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fac10 00:29:03.848 [2024-04-26 15:47:34.137761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:13462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:03.848 [2024-04-26 15:47:34.137801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.147466] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f20d8 00:29:04.107 [2024-04-26 15:47:34.148605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:24966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.148701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.158948] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f2948 00:29:04.107 [2024-04-26 15:47:34.160238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:11922 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.160295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.171491] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f46d0 00:29:04.107 [2024-04-26 15:47:34.172885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25049 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.172940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.181475] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ec840 00:29:04.107 [2024-04-26 15:47:34.182532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.182565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.191993] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fd640 00:29:04.107 [2024-04-26 15:47:34.192776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:7589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.192826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.205604] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fb048 00:29:04.107 [2024-04-26 15:47:34.206988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:18829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.207058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.215508] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f4f40 00:29:04.107 [2024-04-26 15:47:34.217378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.217440] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.228246] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ebfd0 00:29:04.107 [2024-04-26 15:47:34.229640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:20172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.229701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.240812] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e99d8 00:29:04.107 [2024-04-26 15:47:34.242689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.242754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.250030] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f4298 00:29:04.107 [2024-04-26 15:47:34.251247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:23706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.251297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.261800] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6b70 00:29:04.107 [2024-04-26 15:47:34.262551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:8460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.262643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.273311] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eaef0 00:29:04.107 [2024-04-26 15:47:34.274351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.274398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.283840] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e1f80 00:29:04.107 [2024-04-26 15:47:34.284773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.284823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.294979] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e4de8 00:29:04.107 [2024-04-26 15:47:34.295904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:22926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.295982] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.306935] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f7da8 00:29:04.107 [2024-04-26 15:47:34.308054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:10061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.308109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.320707] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5ec8 00:29:04.107 [2024-04-26 15:47:34.322377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:24493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.322442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.330820] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5a90 00:29:04.107 [2024-04-26 15:47:34.331693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.107 [2024-04-26 15:47:34.331746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:04.107 [2024-04-26 15:47:34.342530] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f3e60 00:29:04.108 [2024-04-26 15:47:34.343709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:13998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.108 [2024-04-26 15:47:34.343769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:04.108 [2024-04-26 15:47:34.354018] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ecc78 00:29:04.108 [2024-04-26 15:47:34.355414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.108 [2024-04-26 15:47:34.355519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:04.108 [2024-04-26 15:47:34.367063] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ed4e8 00:29:04.108 [2024-04-26 15:47:34.368867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.108 [2024-04-26 15:47:34.368921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:04.108 [2024-04-26 15:47:34.375268] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f6890 00:29:04.108 [2024-04-26 15:47:34.375954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:4482 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.108 [2024-04-26 
15:47:34.376024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:04.108 [2024-04-26 15:47:34.386664] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc560 00:29:04.108 [2024-04-26 15:47:34.387547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:20658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.108 [2024-04-26 15:47:34.387642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:04.108 [2024-04-26 15:47:34.398466] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fd640 00:29:04.108 [2024-04-26 15:47:34.399217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:7987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.108 [2024-04-26 15:47:34.399286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.408713] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e95a0 00:29:04.368 [2024-04-26 15:47:34.409435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.409499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.421222] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0bc0 00:29:04.368 [2024-04-26 15:47:34.422099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:1323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.422170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.431596] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc998 00:29:04.368 [2024-04-26 15:47:34.432307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.432390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.446645] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190efae0 00:29:04.368 [2024-04-26 15:47:34.448452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.448531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.454688] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f92c0 00:29:04.368 [2024-04-26 15:47:34.455384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19243 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:04.368 [2024-04-26 15:47:34.455419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.468319] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e27f0 00:29:04.368 [2024-04-26 15:47:34.469656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.469688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.480161] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f6458 00:29:04.368 [2024-04-26 15:47:34.481778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.481812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.489184] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f7da8 00:29:04.368 [2024-04-26 15:47:34.490143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.490198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.503165] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6738 00:29:04.368 [2024-04-26 15:47:34.504757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:1313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.504806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.511537] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f3a28 00:29:04.368 [2024-04-26 15:47:34.512239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:7017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.512285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.525462] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190df118 00:29:04.368 [2024-04-26 15:47:34.526660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:25010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.526760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.535849] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6300 00:29:04.368 [2024-04-26 15:47:34.537278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:18415 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.537347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.547378] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f0788 00:29:04.368 [2024-04-26 15:47:34.548428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.548475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.559002] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fd640 00:29:04.368 [2024-04-26 15:47:34.559705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:10403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.559750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.570477] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e3060 00:29:04.368 [2024-04-26 15:47:34.571543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:3350 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.571588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.582060] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fe2e8 00:29:04.368 [2024-04-26 15:47:34.583130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:16782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.583191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.593962] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fdeb0 00:29:04.368 [2024-04-26 15:47:34.594823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.594872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.605625] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190de470 00:29:04.368 [2024-04-26 15:47:34.606859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.606906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.617169] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6fa8 00:29:04.368 [2024-04-26 15:47:34.618385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 
lba:23417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.618434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.628019] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f6890 00:29:04.368 [2024-04-26 15:47:34.629440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:22609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.629484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.639530] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f20d8 00:29:04.368 [2024-04-26 15:47:34.640641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.640704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:04.368 [2024-04-26 15:47:34.653911] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ecc78 00:29:04.368 [2024-04-26 15:47:34.655671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.368 [2024-04-26 15:47:34.655749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.662455] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e4de8 00:29:04.628 [2024-04-26 15:47:34.663277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:2607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.663338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.676697] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190eaab8 00:29:04.628 [2024-04-26 15:47:34.678167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17562 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.678227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.685771] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190edd58 00:29:04.628 [2024-04-26 15:47:34.686596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.686637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.699643] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f96f8 00:29:04.628 [2024-04-26 15:47:34.700945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:38 nsid:1 lba:11321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.700989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.710973] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc998 00:29:04.628 [2024-04-26 15:47:34.712430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.712468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.721881] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190de038 00:29:04.628 [2024-04-26 15:47:34.723266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.723306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.734451] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f2948 00:29:04.628 [2024-04-26 15:47:34.735942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.735982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.744708] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f8a50 00:29:04.628 [2024-04-26 15:47:34.745756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:17253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.745801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.756286] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f7da8 00:29:04.628 [2024-04-26 15:47:34.757465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.757517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.770328] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ebfd0 00:29:04.628 [2024-04-26 15:47:34.772091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:14583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.772132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.778616] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e01f8 00:29:04.628 [2024-04-26 15:47:34.779472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:12060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.779509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.792897] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6738 00:29:04.628 [2024-04-26 15:47:34.794464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:8419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.794523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.803303] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190df550 00:29:04.628 [2024-04-26 15:47:34.804487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.804549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.815497] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e7c50 00:29:04.628 [2024-04-26 15:47:34.816538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:16764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.816621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.827811] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190ddc00 00:29:04.628 [2024-04-26 15:47:34.829456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:19116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.829540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.836244] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fac10 00:29:04.628 [2024-04-26 15:47:34.836989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:25375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.837065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.850378] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fb8b8 00:29:04.628 [2024-04-26 15:47:34.851767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:4167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.851837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.860397] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f1ca0 00:29:04.628 [2024-04-26 
15:47:34.861010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:2158 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.861047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.874159] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e6300 00:29:04.628 [2024-04-26 15:47:34.875832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.875874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.882491] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f1430 00:29:04.628 [2024-04-26 15:47:34.883279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.883324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.897385] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f7100 00:29:04.628 [2024-04-26 15:47:34.899075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12785 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.899113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.905739] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f46d0 00:29:04.628 [2024-04-26 15:47:34.906547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.628 [2024-04-26 15:47:34.906582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:29:04.628 [2024-04-26 15:47:34.919670] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f2d80 00:29:04.886 [2024-04-26 15:47:34.921013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.886 [2024-04-26 15:47:34.921068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:04.886 [2024-04-26 15:47:34.930307] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190f20d8 00:29:04.886 [2024-04-26 15:47:34.931731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:21031 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.886 [2024-04-26 15:47:34.931768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:04.886 [2024-04-26 15:47:34.941810] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190fc128 
00:29:04.886 [2024-04-26 15:47:34.942815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:14111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.886 [2024-04-26 15:47:34.942851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:04.886 [2024-04-26 15:47:34.953272] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e3498 00:29:04.886 [2024-04-26 15:47:34.954407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:10189 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.886 [2024-04-26 15:47:34.954442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:29:04.886 [2024-04-26 15:47:34.967341] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e23b8 00:29:04.886 [2024-04-26 15:47:34.969123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:18681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.886 [2024-04-26 15:47:34.969174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:29:04.887 [2024-04-26 15:47:34.977459] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190de038 00:29:04.887 [2024-04-26 15:47:34.978442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.887 [2024-04-26 15:47:34.978478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:04.887 [2024-04-26 15:47:34.987862] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e8088 00:29:04.887 [2024-04-26 15:47:34.989006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20024 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.887 [2024-04-26 15:47:34.989045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:29:04.887 [2024-04-26 15:47:34.999613] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bb00) with pdu=0x2000190e5220 00:29:04.887 [2024-04-26 15:47:35.000318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:1069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:04.887 [2024-04-26 15:47:35.000369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:04.887 00:29:04.887 Latency(us) 00:29:04.887 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:04.887 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:04.887 nvme0n1 : 2.00 21889.00 85.50 0.00 0.00 5839.08 2189.50 16324.42 00:29:04.887 =================================================================================================================== 00:29:04.887 Total : 21889.00 85.50 0.00 0.00 5839.08 2189.50 16324.42 00:29:04.887 0 00:29:04.887 15:47:35 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:04.887 15:47:35 -- host/digest.sh@28 -- 
# jq -r '.bdevs[0] 00:29:04.887 | .driver_specific 00:29:04.887 | .nvme_error 00:29:04.887 | .status_code 00:29:04.887 | .command_transient_transport_error' 00:29:04.887 15:47:35 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:04.887 15:47:35 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:05.144 15:47:35 -- host/digest.sh@71 -- # (( 172 > 0 )) 00:29:05.144 15:47:35 -- host/digest.sh@73 -- # killprocess 86033 00:29:05.144 15:47:35 -- common/autotest_common.sh@936 -- # '[' -z 86033 ']' 00:29:05.144 15:47:35 -- common/autotest_common.sh@940 -- # kill -0 86033 00:29:05.144 15:47:35 -- common/autotest_common.sh@941 -- # uname 00:29:05.144 15:47:35 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:05.144 15:47:35 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86033 00:29:05.144 15:47:35 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:05.144 15:47:35 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:05.144 killing process with pid 86033 00:29:05.144 15:47:35 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86033' 00:29:05.144 Received shutdown signal, test time was about 2.000000 seconds 00:29:05.144 00:29:05.144 Latency(us) 00:29:05.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:05.144 =================================================================================================================== 00:29:05.145 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:05.145 15:47:35 -- common/autotest_common.sh@955 -- # kill 86033 00:29:05.145 15:47:35 -- common/autotest_common.sh@960 -- # wait 86033 00:29:05.711 15:47:35 -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:29:05.711 15:47:35 -- host/digest.sh@54 -- # local rw bs qd 00:29:05.711 15:47:35 -- host/digest.sh@56 -- # rw=randwrite 00:29:05.711 15:47:35 -- host/digest.sh@56 -- # bs=131072 00:29:05.711 15:47:35 -- host/digest.sh@56 -- # qd=16 00:29:05.711 15:47:35 -- host/digest.sh@58 -- # bperfpid=86118 00:29:05.711 15:47:35 -- host/digest.sh@57 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:29:05.711 15:47:35 -- host/digest.sh@60 -- # waitforlisten 86118 /var/tmp/bperf.sock 00:29:05.711 15:47:35 -- common/autotest_common.sh@817 -- # '[' -z 86118 ']' 00:29:05.711 15:47:35 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:05.711 15:47:35 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:05.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:05.711 15:47:35 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:05.711 15:47:35 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:05.711 15:47:35 -- common/autotest_common.sh@10 -- # set +x 00:29:05.711 [2024-04-26 15:47:35.781258] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:29:05.711 [2024-04-26 15:47:35.781410] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86118 ] 00:29:05.711 I/O size of 131072 is greater than zero copy threshold (65536). 
00:29:05.711 Zero copy mechanism will not be used. 00:29:05.711 [2024-04-26 15:47:35.922242] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:05.969 [2024-04-26 15:47:36.077256] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:06.535 15:47:36 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:06.535 15:47:36 -- common/autotest_common.sh@850 -- # return 0 00:29:06.535 15:47:36 -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.535 15:47:36 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:06.793 15:47:36 -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:06.793 15:47:36 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:06.793 15:47:36 -- common/autotest_common.sh@10 -- # set +x 00:29:06.793 15:47:36 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:06.793 15:47:36 -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:06.793 15:47:37 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:07.050 nvme0n1 00:29:07.050 15:47:37 -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:07.050 15:47:37 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:07.050 15:47:37 -- common/autotest_common.sh@10 -- # set +x 00:29:07.309 15:47:37 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:07.309 15:47:37 -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:07.309 15:47:37 -- host/digest.sh@19 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:07.309 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:07.309 Zero copy mechanism will not be used. 00:29:07.309 Running I/O for 2 seconds... 
00:29:07.309 [2024-04-26 15:47:37.477922] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.478287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.478331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.483738] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.484044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.484080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.489345] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.489656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.489696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.494904] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.495225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.495271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.500615] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.500924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.500951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.506224] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.506549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.506591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.512051] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.512417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.512461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.517922] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.518278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.518312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.523678] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.524012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.524054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.529412] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.529737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.529770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.535075] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.535419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.535459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.540642] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.540934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.540968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.546117] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.546454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.546493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.551698] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.552004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.552038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.557305] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.557611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.309 [2024-04-26 15:47:37.557644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.309 [2024-04-26 15:47:37.562900] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.309 [2024-04-26 15:47:37.563219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.563255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.310 [2024-04-26 15:47:37.568494] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.310 [2024-04-26 15:47:37.568786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.568821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.310 [2024-04-26 15:47:37.574082] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.310 [2024-04-26 15:47:37.574406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.574445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.310 [2024-04-26 15:47:37.579678] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.310 [2024-04-26 15:47:37.579980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.580015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.310 [2024-04-26 15:47:37.585429] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.310 [2024-04-26 15:47:37.585735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.585776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.310 [2024-04-26 15:47:37.591045] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.310 [2024-04-26 15:47:37.591364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.591403] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.310 [2024-04-26 15:47:37.596676] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.310 [2024-04-26 15:47:37.596987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.310 [2024-04-26 15:47:37.597020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.568 [2024-04-26 15:47:37.602267] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.568 [2024-04-26 15:47:37.602577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.568 [2024-04-26 15:47:37.602609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.568 [2024-04-26 15:47:37.607880] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.568 [2024-04-26 15:47:37.608200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.568 [2024-04-26 15:47:37.608245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.568 [2024-04-26 15:47:37.613519] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.568 [2024-04-26 15:47:37.613808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.613843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.619064] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.619365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.619403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.624569] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.624883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.624916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.630114] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.630432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 
[2024-04-26 15:47:37.630476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.635751] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.636053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.636085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.641319] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.641628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.641660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.646937] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.647256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.647291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.652598] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.652890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.652926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.658166] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.658470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.658503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.663718] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.664005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.664041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.669305] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.669610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.669645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.674862] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.675177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.675206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.680423] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.680710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.680744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.686019] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.686335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.686369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.691656] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.691961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.691995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.697232] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.697536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.697569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.702796] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.703100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.703132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.708331] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.708648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.708694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.713929] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.714246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.714281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.719505] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.719796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.719837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.725089] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.725397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.725430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.730735] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.731041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.731075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.736324] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.736640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.736674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.741929] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.742246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.742279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.747543] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.747852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.747887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.753152] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.753457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.753491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.758654] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.569 [2024-04-26 15:47:37.758955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.569 [2024-04-26 15:47:37.758987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.569 [2024-04-26 15:47:37.764182] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.764495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.764530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.769740] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.770042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.770074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.775297] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.775601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.775633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.780942] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.781260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.781292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.786517] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 
[2024-04-26 15:47:37.786804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.786839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.792131] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.792462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.792495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.797788] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.798094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.798126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.803356] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.803660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.803702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.808944] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.809270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.809305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.814618] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.814925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.814957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.820181] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.820501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.820533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.825696] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.826006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.826063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.831427] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.831731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.831767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.836976] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.837289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.837322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.842507] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.842816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.842853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.848240] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.848540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.848576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.853510] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.853782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.853816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.570 [2024-04-26 15:47:37.858781] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.570 [2024-04-26 15:47:37.859057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.570 [2024-04-26 15:47:37.859091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.829 [2024-04-26 15:47:37.864076] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.829 [2024-04-26 15:47:37.864482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.829 [2024-04-26 15:47:37.864541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.829 [2024-04-26 15:47:37.869257] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.829 [2024-04-26 15:47:37.869501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.829 [2024-04-26 15:47:37.869538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.829 [2024-04-26 15:47:37.874168] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.829 [2024-04-26 15:47:37.874399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.829 [2024-04-26 15:47:37.874434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.829 [2024-04-26 15:47:37.878978] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.829 [2024-04-26 15:47:37.879224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.829 [2024-04-26 15:47:37.879257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.829 [2024-04-26 15:47:37.883736] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.829 [2024-04-26 15:47:37.883953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.829 [2024-04-26 15:47:37.883986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.829 [2024-04-26 15:47:37.888682] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.829 [2024-04-26 15:47:37.888897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.888930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.893502] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.893716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.893751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:29:07.830 [2024-04-26 15:47:37.898265] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.898484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.898517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.903128] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.903362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.903394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.908057] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.908287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.908319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.913020] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.913260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.913290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.917912] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.918127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.918172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.922778] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.923032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.923087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.927706] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.928074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.928118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.932478] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.932593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.932617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.937386] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.937608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.937643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.942181] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.942297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.942323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.946956] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.947140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.947200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.951873] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.951962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.951988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.956834] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.956946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.956970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.961629] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.961718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.961744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.966533] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.966669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.966693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.971335] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.971511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.971534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.976180] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.976259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.976283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.981015] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.981133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.981174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.985926] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.986035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.986063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.990785] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.991079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.991128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:37.995663] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:37.995761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:37.995787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:38.000497] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:38.000677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:38.000702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:38.005435] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:38.005554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:38.005579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:38.010325] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:38.010463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:38.010489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:38.015197] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:38.015310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.830 [2024-04-26 15:47:38.015334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.830 [2024-04-26 15:47:38.020018] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.830 [2024-04-26 15:47:38.020154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.020180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.024814] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.024955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.024990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.029675] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.029763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 
15:47:38.029792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.034654] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.034783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.034819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.039511] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.039593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.039620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.044542] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.044658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.044683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.049418] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.049533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.049557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.054296] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.054424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.054447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.059095] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.059219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.059254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.063887] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.063964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:07.831 [2024-04-26 15:47:38.063990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.068772] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.068865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.068893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.073597] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.073669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.073695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.078422] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.078509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.078534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.083330] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.083427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.083452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.088067] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.088177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.088214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.092871] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.092950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.092976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.097700] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.097778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.097803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.102586] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.102679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.102705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.107470] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.107650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.107692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.112271] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.112380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.112406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:07.831 [2024-04-26 15:47:38.117189] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:07.831 [2024-04-26 15:47:38.117313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:07.831 [2024-04-26 15:47:38.117338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.122013] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.122111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.122152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.126954] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.127046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.127072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.131869] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.131951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.131975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.136747] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.136883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.136907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.141574] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.141731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.146602] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.146693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.146717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.151488] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.151568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.151592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.156449] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.156578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.156600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.161445] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.161565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.161588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.166383] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.166480] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.166503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.171205] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.171318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.171340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.176173] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.176304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.176326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.181039] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.181150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.181173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.185969] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.186050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.186073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.190902] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.191022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.191045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.195757] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.195873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.195896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.200664] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.200762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.200785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.205618] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.205701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.205725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.210580] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.210695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.210720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.215413] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.215507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.215531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.220332] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.220448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.220472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.225180] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.225265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.225288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.230062] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 15:47:38.230183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.091 [2024-04-26 15:47:38.230207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.091 [2024-04-26 15:47:38.234875] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.091 [2024-04-26 
15:47:38.235038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.235061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.239781] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.239869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.239891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.244686] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.244814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.244838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.249668] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.249785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.249809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.254535] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.254612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.254635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.259385] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.259471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.259494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.264322] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.264452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.264475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.269197] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with 
pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.269287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.269310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.274016] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.274131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.274169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.278929] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.279008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.279031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.283739] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.283846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.283869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.288649] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.288768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.288790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.293480] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.293601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.293623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.298384] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.298464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.298487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.303359] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.303439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.303462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.308312] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.308431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.308453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.313158] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.313267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.313290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.318021] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.318108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.318130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.323020] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.323132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.323170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.327979] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.328054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.328077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.332849] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.332926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.332949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.337688] tcp.c:2053:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.337774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.337797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.342563] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.342678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.342701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.347377] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.347472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.347495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.352243] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.352346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.352371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.357201] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.357311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.357334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.362023] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.362097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.092 [2024-04-26 15:47:38.362120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.092 [2024-04-26 15:47:38.366868] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.092 [2024-04-26 15:47:38.366998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.093 [2024-04-26 15:47:38.367021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.093 [2024-04-26 15:47:38.371742] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.093 [2024-04-26 15:47:38.371836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.093 [2024-04-26 15:47:38.371858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.093 [2024-04-26 15:47:38.376656] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.093 [2024-04-26 15:47:38.376753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.093 [2024-04-26 15:47:38.376775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.093 [2024-04-26 15:47:38.381515] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.093 [2024-04-26 15:47:38.381597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.093 [2024-04-26 15:47:38.381619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.386374] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.386448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.386470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.391157] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.391257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.391280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.395940] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.396017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.396041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.400844] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.400919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.400941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.352 
[2024-04-26 15:47:38.405711] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.405791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.405815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.410561] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.410671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.410694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.415463] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.415575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.415597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.420320] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.420410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.420433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.425072] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.425163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.425186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.429843] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.429915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.429938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.434666] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.434775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.352 [2024-04-26 15:47:38.434797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 
m:0 dnr:0 00:29:08.352 [2024-04-26 15:47:38.439535] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.352 [2024-04-26 15:47:38.439627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.439650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.444395] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.444501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.444523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.449204] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.449290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.449312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.454084] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.454196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.454218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.458984] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.459075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.459100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.463964] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.464055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.464079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.468970] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.469068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.469091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.473892] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.474002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.474024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.478849] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.478962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.478985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.483624] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.483738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.483762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.488535] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.488627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.488649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.493395] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.493474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.493497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.498288] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.498375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.498398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.503115] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.503241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.503263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.507986] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.508097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.508120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.512887] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.512966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.512989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.517855] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.517967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.517989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.522739] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.522818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.522841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.527618] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.527731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.527753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.532536] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.532647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.532669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.537446] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.537557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.537580] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.542317] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.542394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.542417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.547126] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.547317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.547340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.552033] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.552180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.552204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.556957] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.557069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.557093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.561846] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.561944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.353 [2024-04-26 15:47:38.561968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.353 [2024-04-26 15:47:38.566765] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.353 [2024-04-26 15:47:38.566856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.566880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.571617] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.571693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.571716] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.576485] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.576599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.576622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.581326] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.581431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.581454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.586253] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.586326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.586349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.591052] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.591153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.591177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.595793] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.595865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.595889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.600672] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.600746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.600768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.605453] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.605529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 
15:47:38.605552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.610324] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.610405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.610428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.615160] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.615249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.615272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.620018] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.620129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.620166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.624866] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.624945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.624968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.629813] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.629925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.629948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.634698] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.634790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.634814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.639538] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.639628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:08.354 [2024-04-26 15:47:38.639651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.354 [2024-04-26 15:47:38.644358] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.354 [2024-04-26 15:47:38.644432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.354 [2024-04-26 15:47:38.644455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.649263] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.649350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.649373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.654186] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.654276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.654299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.659044] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.659128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.659165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.663948] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.664070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.664092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.668899] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.669013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.669036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.673849] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.673959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.673982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.678877] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.678988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.679010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.683763] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.683844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.683867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.688616] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.688695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.688719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.693461] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.693538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.693561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.698336] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.698417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.698439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.703258] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.703353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.703376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.708104] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.708248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.708271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.713031] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.713144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.713181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.717912] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.718026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.718049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.722757] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.722872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.722894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.727677] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.727753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.727776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.732621] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.732736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.732760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.737591] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.737674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.737696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.742564] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.742664] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.742687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.747441] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.747519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.747542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.752381] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.752496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.752518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.757342] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.757418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.757441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.762205] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.762297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.762320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.767086] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.767236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.613 [2024-04-26 15:47:38.767259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.613 [2024-04-26 15:47:38.772067] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.613 [2024-04-26 15:47:38.772179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.772202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.776980] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.777069] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.777091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.781883] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.781959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.781982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.786970] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.787102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.787124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.791911] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.792043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.792066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.796857] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.796972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.796995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.801714] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.801792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.801817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.806566] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.806678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.806701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.811440] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 
15:47:38.811517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.811540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.816278] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.816366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.816389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.821165] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.821247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.821270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.826048] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.826155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.826178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.830882] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.831010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.831032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.835681] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.835777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.835800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.840470] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.840611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.840633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.845339] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 
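Every repeated entry above follows the same pattern: the host TCP transport's data_crc32_calc_done callback reports a data digest mismatch on a WRITE PDU, and the command then completes with COMMAND TRANSIENT TRANSPORT ERROR (00/22) with dnr:0, i.e. a retryable transport-level failure rather than a media error. In NVMe/TCP the data digest is a CRC32C (Castagnoli) checksum over the PDU payload. The following standalone sketch is not SPDK code; it only illustrates, under that assumption, the kind of check that produces these lines: compute CRC32C over the payload and compare it with the digest carried in the PDU.

    /*
     * Minimal standalone sketch (not SPDK code): verify an NVMe/TCP-style data
     * digest, assuming the digest is CRC32C (Castagnoli) over the PDU payload.
     * A mismatch corresponds to the "Data digest error" lines in the log above.
     */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Bitwise (slow but dependency-free) CRC32C, reflected polynomial 0x82F63B78. */
    static uint32_t crc32c(const uint8_t *buf, size_t len)
    {
        uint32_t crc = 0xFFFFFFFFu;

        for (size_t i = 0; i < len; i++) {
            crc ^= buf[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)-(int32_t)(crc & 1));
            }
        }
        return crc ^ 0xFFFFFFFFu;
    }

    int main(void)
    {
        /* Hypothetical payload buffer, for illustration only. */
        uint8_t payload[4096];
        memset(payload, 0xA5, sizeof(payload));

        /* Digest the sender would have placed in the PDU's DDGST field. */
        uint32_t expected = crc32c(payload, sizeof(payload));

        /* Simulate corruption in flight, then re-check the digest on receive. */
        payload[7] ^= 0x01;
        uint32_t actual = crc32c(payload, sizeof(payload));

        if (actual != expected) {
            printf("Data digest error: expected 0x%08x, got 0x%08x\n",
                   expected, actual);
        } else {
            printf("Digest OK: 0x%08x\n", expected);
        }
        return 0;
    }

Because dnr is 0 in each completion, the initiator is free to retry these commands, which is why the test keeps issuing writes and the same error triplet repeats for the remainder of this run.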
00:29:08.614 [2024-04-26 15:47:38.845451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.845475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.850164] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.850241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.850264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.854977] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.855061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.855084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.859821] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.859916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.859939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.864634] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.864753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.864776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.869462] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.869536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.869559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.874284] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.874388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.874412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.879186] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.879264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.879287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.884081] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.884193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.884217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.889062] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.889151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.889176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.893994] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.894089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.894112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.614 [2024-04-26 15:47:38.898832] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.614 [2024-04-26 15:47:38.898910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.614 [2024-04-26 15:47:38.898933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.615 [2024-04-26 15:47:38.903758] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.615 [2024-04-26 15:47:38.903845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.615 [2024-04-26 15:47:38.903869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.908662] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.908737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.908760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.913553] tcp.c:2053:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.913635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.913658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.918547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.918669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.918692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.923536] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.923635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.923657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.928389] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.928483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.928505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.933368] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.933440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.933462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.938282] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.938409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.938432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.943214] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.943292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.943316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.948162] 
tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.948256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.948279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.953152] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.953250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.953273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.958131] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.958231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.958253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.962988] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.963188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.963210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.967995] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.968092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.968115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.972940] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.973040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.973063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.977926] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.978048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.978072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.874 
[2024-04-26 15:47:38.982839] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.982950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.874 [2024-04-26 15:47:38.982972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.874 [2024-04-26 15:47:38.987837] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.874 [2024-04-26 15:47:38.987951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:38.987973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:38.992776] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:38.992862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:38.992884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:38.997705] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:38.997792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:38.997816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.002554] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.002674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.002697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.007486] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.007600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.007622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.012452] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.012533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.012555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 
p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.017311] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.017405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.017428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.022318] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.022436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.022459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.027299] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.027380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.027402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.032225] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.032313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.032348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.037119] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.037220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.037242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.041995] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.042103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.042125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.046829] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.046944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.046966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.051777] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.051861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.051885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.056654] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.056752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.056775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.061528] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.061622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.061645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.066476] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.066552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.066574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.071377] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.071453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.071476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.076311] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.076412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.076434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.081276] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.081446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.081468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.086208] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.086354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.086376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.091102] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.091216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.091239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.096035] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.096162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.096199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.100975] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.101061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.101084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.105991] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.106083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.106106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.110906] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.111001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.111024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.115885] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.115975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.875 [2024-04-26 15:47:39.115997] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.875 [2024-04-26 15:47:39.120740] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.875 [2024-04-26 15:47:39.120849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.120871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.125599] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.125685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.125708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.130478] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.130588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.130610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.135343] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.135474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.135496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.140206] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.140316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.140352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.145022] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.145149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.145173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.149886] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.149971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 
15:47:39.149994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.154774] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.154884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.154906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.159628] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.159707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.159731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:08.876 [2024-04-26 15:47:39.164547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:08.876 [2024-04-26 15:47:39.164647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:08.876 [2024-04-26 15:47:39.164671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.169456] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.169646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.169670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.174318] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.174437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.174460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.179249] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.179362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.179385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.184168] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.184277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:09.136 [2024-04-26 15:47:39.184300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.189097] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.189191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.189214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.194038] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.194169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.194193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.198854] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.198931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.198955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.203648] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.203727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.203750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.208548] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.208627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.208650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.213428] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.213511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.213534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.218328] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.218406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8736 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.218428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.223108] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.223238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.223262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.227954] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.228052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.228074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.232782] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.232854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.232877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.237667] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.237740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.237763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.242547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.242658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.242681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.247426] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.247538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.247560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.252195] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.252269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.252292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.257059] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.257152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.257175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.261875] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.261953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.136 [2024-04-26 15:47:39.261976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.136 [2024-04-26 15:47:39.266785] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.136 [2024-04-26 15:47:39.266863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.266886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.271816] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.271932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.271956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.276703] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.276795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.276817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.281582] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.281675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.281697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.286499] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.286596] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.286619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.291414] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.291509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.291531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.296346] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.296459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.296489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.301272] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.301347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.301370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.306272] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.306356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.306380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.311099] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.311209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.311233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.315993] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.316105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.316128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.320852] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.320951] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.320974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.325843] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.325927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.325952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.330704] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.330785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.330807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.335576] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.335656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.335680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.340528] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.340659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.340683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.345402] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.345480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.345504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.350377] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.350479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.350501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.355192] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 
15:47:39.355292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.355314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.359989] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.360116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.360151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.364806] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.364913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.364936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.369700] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.369784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.369808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.374547] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.374626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.374649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.379517] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.379613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.379637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.384406] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.384490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.384513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.389250] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with 
pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.389327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.389350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.394260] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.394339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.137 [2024-04-26 15:47:39.394364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.137 [2024-04-26 15:47:39.399044] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.137 [2024-04-26 15:47:39.399128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.138 [2024-04-26 15:47:39.399171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.138 [2024-04-26 15:47:39.403933] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.138 [2024-04-26 15:47:39.404026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.138 [2024-04-26 15:47:39.404049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.138 [2024-04-26 15:47:39.408834] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.138 [2024-04-26 15:47:39.408947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.138 [2024-04-26 15:47:39.408970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.138 [2024-04-26 15:47:39.413696] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.138 [2024-04-26 15:47:39.413800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.138 [2024-04-26 15:47:39.413823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.138 [2024-04-26 15:47:39.418604] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.138 [2024-04-26 15:47:39.418733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.138 [2024-04-26 15:47:39.418756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.138 [2024-04-26 15:47:39.423415] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.138 [2024-04-26 15:47:39.423523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.138 [2024-04-26 15:47:39.423546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.428245] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.396 [2024-04-26 15:47:39.428320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.396 [2024-04-26 15:47:39.428356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.433099] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.396 [2024-04-26 15:47:39.433188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.396 [2024-04-26 15:47:39.433210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.438024] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.396 [2024-04-26 15:47:39.438148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.396 [2024-04-26 15:47:39.438171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.442884] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.396 [2024-04-26 15:47:39.442978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.396 [2024-04-26 15:47:39.443001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.447765] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.396 [2024-04-26 15:47:39.447857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.396 [2024-04-26 15:47:39.447880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.452648] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90 00:29:09.396 [2024-04-26 15:47:39.452758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:09.396 [2024-04-26 15:47:39.452781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:29:09.396 [2024-04-26 15:47:39.457510] tcp.c:2053:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90
00:29:09.396 [2024-04-26 15:47:39.457593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.396 [2024-04-26 15:47:39.457616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:29:09.396 [2024-04-26 15:47:39.462422] tcp.c:2053:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d1bca0) with pdu=0x2000190fef90
00:29:09.396 [2024-04-26 15:47:39.462579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:09.396 [2024-04-26 15:47:39.462601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:29:09.396 
00:29:09.397 Latency(us)
00:29:09.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.397 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:29:09.397 nvme0n1 : 2.00 6158.62 769.83 0.00 0.00 2592.32 1980.97 11677.32
00:29:09.397 ===================================================================================================================
00:29:09.397 Total : 6158.62 769.83 0.00 0.00 2592.32 1980.97 11677.32
00:29:09.397 0
00:29:09.397 15:47:39 -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:09.397 15:47:39 -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:09.397 | .driver_specific
00:29:09.397 | .nvme_error
00:29:09.397 | .status_code
00:29:09.397 | .command_transient_transport_error'
00:29:09.397 15:47:39 -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:09.397 15:47:39 -- host/digest.sh@18 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:09.655 15:47:39 -- host/digest.sh@71 -- # (( 397 > 0 ))
00:29:09.655 15:47:39 -- host/digest.sh@73 -- # killprocess 86118
00:29:09.655 15:47:39 -- common/autotest_common.sh@936 -- # '[' -z 86118 ']'
00:29:09.655 15:47:39 -- common/autotest_common.sh@940 -- # kill -0 86118
00:29:09.655 15:47:39 -- common/autotest_common.sh@941 -- # uname
00:29:09.655 15:47:39 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:29:09.655 15:47:39 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86118
00:29:09.655 15:47:39 -- common/autotest_common.sh@942 -- # process_name=reactor_1
00:29:09.655 15:47:39 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']'
00:29:09.655 killing process with pid 86118
00:29:09.655 15:47:39 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86118'
00:29:09.655 15:47:39 -- common/autotest_common.sh@955 -- # kill 86118
00:29:09.655 Received shutdown signal, test time was about 2.000000 seconds
00:29:09.655 
00:29:09.655 Latency(us)
00:29:09.655 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:09.655 ===================================================================================================================
00:29:09.655 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:09.655 15:47:39 -- common/autotest_common.sh@960 -- # wait 86118
00:29:09.937 15:47:40 -- host/digest.sh@116 -- # killprocess 85803
00:29:09.937 15:47:40 -- common/autotest_common.sh@936 -- # '[' -z 85803 ']'
00:29:09.937 15:47:40 -- common/autotest_common.sh@940 -- # kill -0 85803
00:29:09.937 15:47:40 
-- common/autotest_common.sh@941 -- # uname 00:29:09.937 15:47:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:09.937 15:47:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 85803 00:29:09.937 15:47:40 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:29:09.937 15:47:40 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:29:09.937 killing process with pid 85803 00:29:09.937 15:47:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 85803' 00:29:09.937 15:47:40 -- common/autotest_common.sh@955 -- # kill 85803 00:29:09.937 15:47:40 -- common/autotest_common.sh@960 -- # wait 85803 00:29:10.197 00:29:10.197 real 0m19.226s 00:29:10.197 user 0m36.432s 00:29:10.197 sys 0m5.217s 00:29:10.197 15:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:10.197 15:47:40 -- common/autotest_common.sh@10 -- # set +x 00:29:10.197 ************************************ 00:29:10.197 END TEST nvmf_digest_error 00:29:10.197 ************************************ 00:29:10.197 15:47:40 -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:10.197 15:47:40 -- host/digest.sh@150 -- # nvmftestfini 00:29:10.197 15:47:40 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:10.197 15:47:40 -- nvmf/common.sh@117 -- # sync 00:29:10.455 15:47:40 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:10.455 15:47:40 -- nvmf/common.sh@120 -- # set +e 00:29:10.455 15:47:40 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:10.455 15:47:40 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:10.455 rmmod nvme_tcp 00:29:10.455 rmmod nvme_fabrics 00:29:10.455 rmmod nvme_keyring 00:29:10.455 15:47:40 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:10.455 15:47:40 -- nvmf/common.sh@124 -- # set -e 00:29:10.455 15:47:40 -- nvmf/common.sh@125 -- # return 0 00:29:10.455 15:47:40 -- nvmf/common.sh@478 -- # '[' -n 85803 ']' 00:29:10.455 15:47:40 -- nvmf/common.sh@479 -- # killprocess 85803 00:29:10.455 15:47:40 -- common/autotest_common.sh@936 -- # '[' -z 85803 ']' 00:29:10.455 15:47:40 -- common/autotest_common.sh@940 -- # kill -0 85803 00:29:10.455 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (85803) - No such process 00:29:10.455 Process with pid 85803 is not found 00:29:10.455 15:47:40 -- common/autotest_common.sh@963 -- # echo 'Process with pid 85803 is not found' 00:29:10.455 15:47:40 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:10.455 15:47:40 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:10.455 15:47:40 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:10.455 15:47:40 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:10.455 15:47:40 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:10.455 15:47:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.455 15:47:40 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:10.455 15:47:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.455 15:47:40 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:10.455 ************************************ 00:29:10.455 END TEST nvmf_digest 00:29:10.455 ************************************ 00:29:10.455 00:29:10.455 real 0m39.999s 00:29:10.455 user 1m14.948s 00:29:10.455 sys 0m10.408s 00:29:10.455 15:47:40 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:10.455 15:47:40 -- common/autotest_common.sh@10 -- # set +x 00:29:10.455 15:47:40 -- nvmf/nvmf.sh@108 -- # [[ 1 -eq 1 ]] 00:29:10.455 15:47:40 -- 
nvmf/nvmf.sh@108 -- # [[ tcp == \t\c\p ]] 00:29:10.455 15:47:40 -- nvmf/nvmf.sh@110 -- # run_test nvmf_mdns_discovery /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:10.455 15:47:40 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:10.455 15:47:40 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:10.455 15:47:40 -- common/autotest_common.sh@10 -- # set +x 00:29:10.455 ************************************ 00:29:10.455 START TEST nvmf_mdns_discovery 00:29:10.455 ************************************ 00:29:10.455 15:47:40 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/mdns_discovery.sh --transport=tcp 00:29:10.713 * Looking for test storage... 00:29:10.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:10.713 15:47:40 -- nvmf/common.sh@7 -- # uname -s 00:29:10.713 15:47:40 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:10.713 15:47:40 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:10.713 15:47:40 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:10.713 15:47:40 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:10.713 15:47:40 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:10.713 15:47:40 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:10.713 15:47:40 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:10.713 15:47:40 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:10.713 15:47:40 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:10.713 15:47:40 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:10.713 15:47:40 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:29:10.713 15:47:40 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:29:10.713 15:47:40 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:10.713 15:47:40 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:10.713 15:47:40 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:10.713 15:47:40 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:10.713 15:47:40 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:10.713 15:47:40 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:10.713 15:47:40 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:10.713 15:47:40 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:10.713 15:47:40 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.713 15:47:40 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.713 15:47:40 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.713 15:47:40 -- paths/export.sh@5 -- # export PATH 00:29:10.713 15:47:40 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:10.713 15:47:40 -- nvmf/common.sh@47 -- # : 0 00:29:10.713 15:47:40 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:10.713 15:47:40 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:10.713 15:47:40 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:10.713 15:47:40 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:10.713 15:47:40 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:10.713 15:47:40 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:10.713 15:47:40 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:10.713 15:47:40 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@12 -- # DISCOVERY_FILTER=address 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@13 -- # DISCOVERY_PORT=8009 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@14 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@17 -- # NQN=nqn.2016-06.io.spdk:cnode 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@18 -- # NQN2=nqn.2016-06.io.spdk:cnode2 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@20 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@21 -- # HOST_SOCK=/tmp/host.sock 00:29:10.713 15:47:40 -- host/mdns_discovery.sh@23 -- # nvmftestinit 00:29:10.713 15:47:40 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:10.713 15:47:40 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:10.713 15:47:40 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:10.713 15:47:40 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:10.713 15:47:40 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:10.713 15:47:40 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:10.713 15:47:40 -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 14> /dev/null' 00:29:10.713 15:47:40 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:10.713 15:47:40 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:10.713 15:47:40 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:10.713 15:47:40 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:10.713 15:47:40 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:10.713 15:47:40 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:10.713 15:47:40 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:10.713 15:47:40 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:10.713 15:47:40 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:10.713 15:47:40 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:10.713 15:47:40 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:10.713 15:47:40 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:10.713 15:47:40 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:10.713 15:47:40 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:10.713 15:47:40 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:10.713 15:47:40 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:10.713 15:47:40 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:10.713 15:47:40 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:10.713 15:47:40 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:10.713 15:47:40 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:10.713 15:47:40 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:10.713 Cannot find device "nvmf_tgt_br" 00:29:10.713 15:47:40 -- nvmf/common.sh@155 -- # true 00:29:10.713 15:47:40 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:10.713 Cannot find device "nvmf_tgt_br2" 00:29:10.713 15:47:40 -- nvmf/common.sh@156 -- # true 00:29:10.713 15:47:40 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:10.713 15:47:40 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:10.713 Cannot find device "nvmf_tgt_br" 00:29:10.713 15:47:40 -- nvmf/common.sh@158 -- # true 00:29:10.713 15:47:40 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:10.713 Cannot find device "nvmf_tgt_br2" 00:29:10.713 15:47:40 -- nvmf/common.sh@159 -- # true 00:29:10.713 15:47:40 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:10.713 15:47:40 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:10.713 15:47:40 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:10.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:10.713 15:47:40 -- nvmf/common.sh@162 -- # true 00:29:10.713 15:47:40 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:10.713 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:10.713 15:47:40 -- nvmf/common.sh@163 -- # true 00:29:10.713 15:47:40 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:10.713 15:47:41 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:10.971 15:47:41 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:10.971 15:47:41 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:10.971 15:47:41 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns 
nvmf_tgt_ns_spdk 00:29:10.971 15:47:41 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:10.971 15:47:41 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:10.971 15:47:41 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:10.971 15:47:41 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:10.971 15:47:41 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:10.971 15:47:41 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:10.971 15:47:41 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:29:10.971 15:47:41 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:10.971 15:47:41 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:10.971 15:47:41 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:10.971 15:47:41 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:10.971 15:47:41 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:10.971 15:47:41 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:10.971 15:47:41 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:10.971 15:47:41 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:10.971 15:47:41 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:10.971 15:47:41 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:10.971 15:47:41 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:10.971 15:47:41 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:10.971 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:10.971 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.094 ms 00:29:10.971 00:29:10.971 --- 10.0.0.2 ping statistics --- 00:29:10.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.971 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:29:10.971 15:47:41 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:10.971 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:10.971 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.103 ms 00:29:10.971 00:29:10.971 --- 10.0.0.3 ping statistics --- 00:29:10.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.971 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:29:10.971 15:47:41 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:10.971 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:10.971 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.022 ms 00:29:10.971 00:29:10.971 --- 10.0.0.1 ping statistics --- 00:29:10.971 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:10.971 rtt min/avg/max/mdev = 0.022/0.022/0.022/0.000 ms 00:29:10.971 15:47:41 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:10.971 15:47:41 -- nvmf/common.sh@422 -- # return 0 00:29:10.971 15:47:41 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:10.971 15:47:41 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:10.971 15:47:41 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:10.971 15:47:41 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:10.971 15:47:41 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:10.971 15:47:41 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:10.971 15:47:41 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:10.971 15:47:41 -- host/mdns_discovery.sh@28 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:29:10.971 15:47:41 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:10.971 15:47:41 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:10.971 15:47:41 -- common/autotest_common.sh@10 -- # set +x 00:29:10.971 15:47:41 -- nvmf/common.sh@470 -- # nvmfpid=86418 00:29:10.971 15:47:41 -- nvmf/common.sh@471 -- # waitforlisten 86418 00:29:10.971 15:47:41 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:29:10.972 15:47:41 -- common/autotest_common.sh@817 -- # '[' -z 86418 ']' 00:29:10.972 15:47:41 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:10.972 15:47:41 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:10.972 15:47:41 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:10.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:10.972 15:47:41 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:10.972 15:47:41 -- common/autotest_common.sh@10 -- # set +x 00:29:11.229 [2024-04-26 15:47:41.266256] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:29:11.229 [2024-04-26 15:47:41.266361] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:11.229 [2024-04-26 15:47:41.402543] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:11.486 [2024-04-26 15:47:41.526822] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:11.486 [2024-04-26 15:47:41.526883] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:11.486 [2024-04-26 15:47:41.526896] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:11.486 [2024-04-26 15:47:41.526909] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:11.486 [2024-04-26 15:47:41.526917] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
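The app_setup_trace notices above name the two supported ways to look at this target's trace data. A minimal sketch of both follows; only the 'spdk_trace -s nvmf -i 0' command and the /dev/shm file name come from the notices themselves, the binary location is assumed from the repo layout used elsewhere in this log.

# Snapshot the 'nvmf' trace group of shared-memory instance 0 while the target is running,
# exactly as the *NOTICE* line suggests (path to the spdk_trace tool is an assumption).
/home/vagrant/spdk_repo/spdk/build/bin/spdk_trace -s nvmf -i 0
# Or keep the raw trace file the target mentions for offline analysis/debug.
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0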
00:29:11.486 [2024-04-26 15:47:41.526954] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:12.049 15:47:42 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:12.049 15:47:42 -- common/autotest_common.sh@850 -- # return 0 00:29:12.049 15:47:42 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:12.049 15:47:42 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:12.049 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.049 15:47:42 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:12.049 15:47:42 -- host/mdns_discovery.sh@30 -- # rpc_cmd nvmf_set_config --discovery-filter=address 00:29:12.049 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.049 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.049 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.049 15:47:42 -- host/mdns_discovery.sh@31 -- # rpc_cmd framework_start_init 00:29:12.049 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.049 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 [2024-04-26 15:47:42.433889] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 [2024-04-26 15:47:42.446035] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 null0 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 null1 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@37 -- # rpc_cmd bdev_null_create null2 1000 512 00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 null2 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@38 -- # rpc_cmd bdev_null_create null3 1000 512 00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 null3 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@39 -- # rpc_cmd bdev_wait_for_examine 
00:29:12.308 15:47:42 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 15:47:42 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@47 -- # hostpid=86473 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@48 -- # waitforlisten 86473 /tmp/host.sock 00:29:12.308 15:47:42 -- common/autotest_common.sh@817 -- # '[' -z 86473 ']' 00:29:12.308 15:47:42 -- common/autotest_common.sh@821 -- # local rpc_addr=/tmp/host.sock 00:29:12.308 15:47:42 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:12.308 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:29:12.308 15:47:42 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:29:12.308 15:47:42 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:12.308 15:47:42 -- common/autotest_common.sh@10 -- # set +x 00:29:12.308 15:47:42 -- host/mdns_discovery.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:29:12.308 [2024-04-26 15:47:42.542913] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:29:12.308 [2024-04-26 15:47:42.543005] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86473 ] 00:29:12.566 [2024-04-26 15:47:42.676499] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:12.566 [2024-04-26 15:47:42.795638] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:13.497 15:47:43 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:13.497 15:47:43 -- common/autotest_common.sh@850 -- # return 0 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@50 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;exit 1' SIGINT SIGTERM 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@51 -- # trap 'process_shm --id $NVMF_APP_SHM_ID;nvmftestfini;kill $hostpid;kill $avahi_clientpid;kill $avahipid;' EXIT 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@55 -- # avahi-daemon --kill 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@57 -- # avahipid=86502 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@58 -- # sleep 1 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@56 -- # ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /dev/fd/63 00:29:13.497 15:47:43 -- host/mdns_discovery.sh@56 -- # echo -e '[server]\nallow-interfaces=nvmf_tgt_if,nvmf_tgt_if2\nuse-ipv4=yes\nuse-ipv6=no' 00:29:13.497 Process 1013 died: No such process; trying to remove PID file. (/run/avahi-daemon//pid) 00:29:13.497 Found user 'avahi' (UID 70) and group 'avahi' (GID 70). 00:29:13.497 Successfully dropped root privileges. 00:29:13.497 avahi-daemon 0.8 starting up. 00:29:13.497 WARNING: No NSS support for mDNS detected, consider installing nss-mdns! 00:29:13.497 Successfully called chroot(). 00:29:13.497 Successfully dropped remaining capabilities. 00:29:13.497 No service file found in /etc/avahi/services. 00:29:14.432 Joining mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:29:14.432 New relevant interface nvmf_tgt_if2.IPv4 for mDNS. 00:29:14.432 Joining mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:29:14.432 New relevant interface nvmf_tgt_if.IPv4 for mDNS. 00:29:14.432 Network interface enumeration completed. 
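The avahi-daemon in the trace above receives its configuration through bash process substitution (the -f /dev/fd/63 argument). Written out as an ordinary file, the echo -e string from the trace expands to the four lines below; the file path and the explicit backgrounding are illustrative only, since the test keeps everything on a file descriptor and records $avahipid instead.

cat > /tmp/avahi-nvmf.conf <<'EOF'
[server]
allow-interfaces=nvmf_tgt_if,nvmf_tgt_if2
use-ipv4=yes
use-ipv6=no
EOF
# Run the responder inside the target's network namespace, as the test does.
# Limiting allow-interfaces keeps mDNS off every interface except the two veths
# that joined the multicast groups in the messages above.
ip netns exec nvmf_tgt_ns_spdk avahi-daemon -f /tmp/avahi-nvmf.conf &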
00:29:14.432 Registering new address record for fe80::b861:3dff:fef2:9f8a on nvmf_tgt_if2.*. 00:29:14.432 Registering new address record for 10.0.0.3 on nvmf_tgt_if2.IPv4. 00:29:14.432 Registering new address record for fe80::98ea:d9ff:fed7:19e5 on nvmf_tgt_if.*. 00:29:14.432 Registering new address record for 10.0.0.2 on nvmf_tgt_if.IPv4. 00:29:14.432 Server startup complete. Host name is fedora38-cloud-1705279005-2131.local. Local service cookie is 3583126272. 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@60 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:29:14.432 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.432 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.432 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@61 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:14.432 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.432 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.432 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@85 -- # notify_id=0 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@91 -- # get_subsystem_names 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.432 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.432 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@68 -- # sort 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@68 -- # xargs 00:29:14.432 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@91 -- # [[ '' == '' ]] 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@92 -- # get_bdev_list 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.432 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.432 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@64 -- # sort 00:29:14.432 15:47:44 -- host/mdns_discovery.sh@64 -- # xargs 00:29:14.432 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@92 -- # [[ '' == '' ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@94 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@95 -- # get_subsystem_names 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # sort 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # xargs 00:29:14.691 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@95 -- # [[ '' == '' ]] 00:29:14.691 
15:47:44 -- host/mdns_discovery.sh@96 -- # get_bdev_list 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # sort 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # xargs 00:29:14.691 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@96 -- # [[ '' == '' ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@99 -- # get_subsystem_names 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # sort 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@68 -- # xargs 00:29:14.691 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@99 -- # [[ '' == '' ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@100 -- # get_bdev_list 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 [2024-04-26 15:47:44.923022] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # sort 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@64 -- # xargs 00:29:14.691 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@100 -- # [[ '' == '' ]] 00:29:14.691 15:47:44 -- host/mdns_discovery.sh@104 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:14.691 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.691 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.691 [2024-04-26 15:47:44.983006] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:14.950 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:44 -- host/mdns_discovery.sh@108 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:29:14.950 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.950 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.950 15:47:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:44 -- host/mdns_discovery.sh@111 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode20 00:29:14.950 15:47:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.950 15:47:44 -- common/autotest_common.sh@10 -- # set +x 00:29:14.950 15:47:45 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@112 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null2 00:29:14.950 15:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.950 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.950 15:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@116 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode20 nqn.2021-12.io.spdk:test 00:29:14.950 15:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.950 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.950 15:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.3 -s 8009 00:29:14.950 15:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.950 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.950 [2024-04-26 15:47:45.022981] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 8009 *** 00:29:14.950 15:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@120 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:29:14.950 15:47:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:14.950 15:47:45 -- common/autotest_common.sh@10 -- # set +x 00:29:14.950 [2024-04-26 15:47:45.030910] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4420 *** 00:29:14.950 15:47:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@124 -- # avahi_clientpid=86553 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@125 -- # sleep 5 00:29:14.950 15:47:45 -- host/mdns_discovery.sh@123 -- # ip netns exec nvmf_tgt_ns_spdk /usr/bin/avahi-publish --domain=local --service CDC _nvme-disc._tcp 8009 NQN=nqn.2014-08.org.nvmexpress.discovery p=tcp 00:29:15.885 [2024-04-26 15:47:45.823029] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:15.885 Established under name 'CDC' 00:29:16.145 [2024-04-26 15:47:46.223063] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:16.145 [2024-04-26 15:47:46.223116] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:29:16.145 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:16.145 cookie is 0 00:29:16.145 is_local: 1 00:29:16.145 our_own: 0 00:29:16.145 wide_area: 0 00:29:16.145 multicast: 1 00:29:16.145 cached: 1 00:29:16.145 [2024-04-26 15:47:46.323048] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:16.145 [2024-04-26 15:47:46.323101] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:29:16.145 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:16.145 cookie is 0 00:29:16.145 is_local: 1 00:29:16.145 our_own: 0 00:29:16.145 wide_area: 0 00:29:16.145 multicast: 1 00:29:16.145 cached: 1 00:29:17.078 [2024-04-26 15:47:47.229565] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:17.078 [2024-04-26 15:47:47.229611] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery 
ctrlr connected 00:29:17.078 [2024-04-26 15:47:47.229630] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:17.078 [2024-04-26 15:47:47.315706] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 new subsystem mdns0_nvme0 00:29:17.078 [2024-04-26 15:47:47.329281] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:17.078 [2024-04-26 15:47:47.329307] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:17.078 [2024-04-26 15:47:47.329324] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:17.336 [2024-04-26 15:47:47.375296] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:17.336 [2024-04-26 15:47:47.375338] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:17.336 [2024-04-26 15:47:47.417467] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem mdns1_nvme0 00:29:17.336 [2024-04-26 15:47:47.479811] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:17.336 [2024-04-26 15:47:47.479860] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@127 -- # get_mdns_discovery_svcs 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:19.881 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@80 -- # sort 00:29:19.881 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@80 -- # xargs 00:29:19.881 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@127 -- # [[ mdns == \m\d\n\s ]] 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@128 -- # get_discovery_ctrlrs 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@76 -- # sort 00:29:19.881 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.881 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@76 -- # xargs 00:29:19.881 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@128 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@129 -- # get_subsystem_names 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@68 -- # sort 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:19.881 15:47:50 -- host/mdns_discovery.sh@68 -- # xargs 00:29:19.881 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:19.881 15:47:50 -- common/autotest_common.sh@10 -- # set 
+x 00:29:20.140 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@129 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@130 -- # get_bdev_list 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:20.140 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:20.140 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@64 -- # sort 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@64 -- # xargs 00:29:20.140 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@130 -- # [[ mdns0_nvme0n1 mdns1_nvme0n1 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\1 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@131 -- # get_subsystem_paths mdns0_nvme0 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # xargs 00:29:20.140 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.140 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.140 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@131 -- # [[ 4420 == \4\4\2\0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@132 -- # get_subsystem_paths mdns1_nvme0 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@72 -- # xargs 00:29:20.140 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.140 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.140 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@132 -- # [[ 4420 == \4\4\2\0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@133 -- # get_notification_count 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:29:20.140 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.140 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.140 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@88 -- # notify_id=2 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@134 -- # [[ 2 == 2 ]] 00:29:20.140 15:47:50 -- host/mdns_discovery.sh@137 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:29:20.140 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.140 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.399 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.399 15:47:50 -- host/mdns_discovery.sh@138 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode20 null3 00:29:20.399 15:47:50 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:20.399 15:47:50 -- common/autotest_common.sh@10 -- # set +x 00:29:20.399 15:47:50 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:20.399 15:47:50 -- host/mdns_discovery.sh@139 -- # sleep 1 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@141 -- # get_bdev_list 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:21.334 15:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@64 -- # sort 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:21.334 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@64 -- # xargs 00:29:21.334 15:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@141 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@142 -- # get_notification_count 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:29:21.334 15:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.334 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.334 15:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@87 -- # notification_count=2 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@143 -- # [[ 2 == 2 ]] 00:29:21.334 15:47:51 -- host/mdns_discovery.sh@147 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:29:21.334 15:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.334 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.335 [2024-04-26 15:47:51.578238] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:21.335 [2024-04-26 15:47:51.579324] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:21.335 [2024-04-26 15:47:51.579503] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:21.335 [2024-04-26 15:47:51.579574] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:21.335 [2024-04-26 15:47:51.579590] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:21.335 15:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.335 15:47:51 -- host/mdns_discovery.sh@148 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4421 00:29:21.335 15:47:51 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:21.335 15:47:51 -- common/autotest_common.sh@10 -- # set +x 00:29:21.335 [2024-04-26 15:47:51.586104] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.3 port 4421 *** 00:29:21.335 [2024-04-26 15:47:51.587306] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:21.335 [2024-04-26 15:47:51.587503] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:21.335 15:47:51 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:21.335 15:47:51 -- host/mdns_discovery.sh@149 -- # sleep 1 00:29:21.594 [2024-04-26 15:47:51.717445] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new path for mdns0_nvme0 00:29:21.594 [2024-04-26 15:47:51.718419] bdev_nvme.c:6843:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for mdns1_nvme0 00:29:21.594 [2024-04-26 15:47:51.782665] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:21.594 [2024-04-26 15:47:51.782709] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:21.594 [2024-04-26 15:47:51.782717] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:21.594 [2024-04-26 15:47:51.782742] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:21.594 [2024-04-26 15:47:51.782903] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 
done 00:29:21.594 [2024-04-26 15:47:51.782914] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:21.594 [2024-04-26 15:47:51.782919] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:21.594 [2024-04-26 15:47:51.782935] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:21.594 [2024-04-26 15:47:51.828545] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:29:21.594 [2024-04-26 15:47:51.828580] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:21.594 [2024-04-26 15:47:51.828627] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 found again 00:29:21.594 [2024-04-26 15:47:51.828637] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@151 -- # get_subsystem_names 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:22.529 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@68 -- # sort 00:29:22.529 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@68 -- # xargs 00:29:22.529 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@151 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@152 -- # get_bdev_list 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:22.529 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.529 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@64 -- # sort 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@64 -- # xargs 00:29:22.529 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@152 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@153 -- # get_subsystem_paths mdns0_nvme0 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:22.529 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.529 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # xargs 00:29:22.529 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@153 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:22.529 15:47:52 -- 
host/mdns_discovery.sh@154 -- # get_subsystem_paths mdns1_nvme0 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:22.529 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.529 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:22.529 15:47:52 -- host/mdns_discovery.sh@72 -- # xargs 00:29:22.529 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@154 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@155 -- # get_notification_count 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@87 -- # jq '. | length' 00:29:22.789 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.789 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.789 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@156 -- # [[ 0 == 0 ]] 00:29:22.789 15:47:52 -- host/mdns_discovery.sh@160 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:22.789 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.789 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.789 [2024-04-26 15:47:52.915048] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:22.789 [2024-04-26 15:47:52.915096] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:22.789 [2024-04-26 15:47:52.915133] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:22.789 [2024-04-26 15:47:52.915162] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:22.789 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.790 15:47:52 -- host/mdns_discovery.sh@161 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420 00:29:22.790 [2024-04-26 15:47:52.919306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.919345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.919360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.919373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.919390] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.919404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.919415] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 ns 15:47:52 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:22.790 id:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.919427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.919436] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.790 15:47:52 -- common/autotest_common.sh@10 -- # set +x 00:29:22.790 [2024-04-26 15:47:52.923038] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:29:22.790 [2024-04-26 15:47:52.923283] bdev_nvme.c:6901:discovery_aer_cb: *INFO*: Discovery[10.0.0.3:8009] got aer 00:29:22.790 [2024-04-26 15:47:52.924306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.924480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.924499] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.924510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.924520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.924530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.924540] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:22.790 [2024-04-26 15:47:52.924549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:22.790 [2024-04-26 15:47:52.924558] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.790 15:47:52 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:22.790 15:47:52 -- host/mdns_discovery.sh@162 -- # sleep 1 00:29:22.790 [2024-04-26 15:47:52.929273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.934273] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.939283] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.790 [2024-04-26 15:47:52.939457] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.939509] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.939527] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.790 [2024-04-26 15:47:52.939538] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 
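
The "connect() failed, errno = 111" errors in the reset attempts around this point are the expected fallout of the two nvmf_subsystem_remove_listener calls above: the 10.0.0.2:4420 and 10.0.0.3:4420 listeners are gone, so every reconnect attempt from the initiator is refused (errno 111 is ECONNREFUSED on Linux) and bdev_nvme keeps resetting the controller until the path is dropped. A minimal sketch of the same removal step, assuming a standard SPDK checkout where rpc_cmd wraps scripts/rpc.py (the exact wrapper path is an assumption, the RPC names and flags are taken from the trace above):

  # Hypothetical standalone reproduction of the listener removal that triggers
  # the reconnect failures seen in this log (NQNs/addresses copied from the trace).
  RPC=./scripts/rpc.py    # assumed location of SPDK's rpc.py
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0  -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode20 -t tcp -a 10.0.0.3 -s 4420
  # errno 111 reported by posix_sock_create decodes to ECONNREFUSED on Linux:
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'

Note that the target-side RPCs in this run use the default RPC socket, while the host-side queries go through -s /tmp/host.sock, since the initiator runs as a second SPDK application with its own RPC server.
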
00:29:22.790 [2024-04-26 15:47:52.939555] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.939571] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.790 [2024-04-26 15:47:52.939580] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.790 [2024-04-26 15:47:52.939591] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.790 [2024-04-26 15:47:52.939607] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.790 [2024-04-26 15:47:52.944279] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.790 [2024-04-26 15:47:52.944416] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.944464] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.944481] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.790 [2024-04-26 15:47:52.944492] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.790 [2024-04-26 15:47:52.944508] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.944522] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.790 [2024-04-26 15:47:52.944530] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.790 [2024-04-26 15:47:52.944539] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.790 [2024-04-26 15:47:52.944554] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.790 [2024-04-26 15:47:52.949375] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.790 [2024-04-26 15:47:52.949475] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.949526] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.949543] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.790 [2024-04-26 15:47:52.949554] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.790 [2024-04-26 15:47:52.949569] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.949583] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.790 [2024-04-26 15:47:52.949591] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.790 [2024-04-26 15:47:52.949600] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:29:22.790 [2024-04-26 15:47:52.949614] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.790 [2024-04-26 15:47:52.954359] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.790 [2024-04-26 15:47:52.954439] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.954485] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.954501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.790 [2024-04-26 15:47:52.954511] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.790 [2024-04-26 15:47:52.954536] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.954551] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.790 [2024-04-26 15:47:52.954560] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.790 [2024-04-26 15:47:52.954568] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.790 [2024-04-26 15:47:52.954582] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.790 [2024-04-26 15:47:52.959445] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.790 [2024-04-26 15:47:52.959525] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.959570] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.959587] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.790 [2024-04-26 15:47:52.959597] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.790 [2024-04-26 15:47:52.959612] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.959626] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.790 [2024-04-26 15:47:52.959634] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.790 [2024-04-26 15:47:52.959643] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.790 [2024-04-26 15:47:52.959657] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.790 [2024-04-26 15:47:52.964413] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.790 [2024-04-26 15:47:52.964493] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.964550] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.964566] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.790 [2024-04-26 15:47:52.964576] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.790 [2024-04-26 15:47:52.964592] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.790 [2024-04-26 15:47:52.964605] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.790 [2024-04-26 15:47:52.964614] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.790 [2024-04-26 15:47:52.964623] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.790 [2024-04-26 15:47:52.964636] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.790 [2024-04-26 15:47:52.969499] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.790 [2024-04-26 15:47:52.969587] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.969634] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.790 [2024-04-26 15:47:52.969651] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.791 [2024-04-26 15:47:52.969661] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.969676] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.969690] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.969698] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.969707] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.791 [2024-04-26 15:47:52.969721] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.791 [2024-04-26 15:47:52.974467] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.791 [2024-04-26 15:47:52.974551] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.974600] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.974616] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.791 [2024-04-26 15:47:52.974627] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.974642] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.974656] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.974664] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.974673] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.791 [2024-04-26 15:47:52.974686] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.791 [2024-04-26 15:47:52.979556] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.791 [2024-04-26 15:47:52.979636] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.979681] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.979697] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.791 [2024-04-26 15:47:52.979707] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.979729] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.979743] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.979751] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.979760] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.791 [2024-04-26 15:47:52.979773] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.791 [2024-04-26 15:47:52.984522] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.791 [2024-04-26 15:47:52.984602] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.984647] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.984664] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.791 [2024-04-26 15:47:52.984674] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.984689] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.984703] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.984711] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.984719] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.791 [2024-04-26 15:47:52.984733] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.791 [2024-04-26 15:47:52.989609] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.791 [2024-04-26 15:47:52.989688] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.989732] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.989748] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.791 [2024-04-26 15:47:52.989759] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.989774] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.989787] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.989796] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.989804] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.791 [2024-04-26 15:47:52.989818] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.791 [2024-04-26 15:47:52.994574] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.791 [2024-04-26 15:47:52.994652] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.994697] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.994713] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.791 [2024-04-26 15:47:52.994724] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.994739] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.994757] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.994765] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.994774] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.791 [2024-04-26 15:47:52.994787] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.791 [2024-04-26 15:47:52.999660] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.791 [2024-04-26 15:47:52.999738] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.999783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:52.999799] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.791 [2024-04-26 15:47:52.999809] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:52.999830] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:52.999843] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:52.999852] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:52.999860] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.791 [2024-04-26 15:47:52.999874] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.791 [2024-04-26 15:47:53.004626] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.791 [2024-04-26 15:47:53.004704] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:53.004749] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:53.004766] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.791 [2024-04-26 15:47:53.004776] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:53.004791] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:53.004804] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:53.004813] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:53.004821] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.791 [2024-04-26 15:47:53.004834] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.791 [2024-04-26 15:47:53.009711] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.791 [2024-04-26 15:47:53.009790] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:53.009835] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:53.009851] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.791 [2024-04-26 15:47:53.009861] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.791 [2024-04-26 15:47:53.009876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.791 [2024-04-26 15:47:53.009889] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.791 [2024-04-26 15:47:53.009898] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.791 [2024-04-26 15:47:53.009906] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.791 [2024-04-26 15:47:53.009920] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.791 [2024-04-26 15:47:53.014679] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.791 [2024-04-26 15:47:53.014783] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:53.014830] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.791 [2024-04-26 15:47:53.014846] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.791 [2024-04-26 15:47:53.014857] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.014872] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.014886] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.014895] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.014903] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.792 [2024-04-26 15:47:53.014917] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.792 [2024-04-26 15:47:53.019767] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.792 [2024-04-26 15:47:53.019865] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.019914] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.019930] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.792 [2024-04-26 15:47:53.019941] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.019958] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.019972] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.019980] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.019990] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.792 [2024-04-26 15:47:53.020004] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.792 [2024-04-26 15:47:53.024754] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.792 [2024-04-26 15:47:53.024841] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.024887] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.024903] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.792 [2024-04-26 15:47:53.024913] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.024929] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.024943] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.024951] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.024960] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.792 [2024-04-26 15:47:53.024974] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.792 [2024-04-26 15:47:53.029840] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.792 [2024-04-26 15:47:53.029938] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.029982] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.029999] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.792 [2024-04-26 15:47:53.030009] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.030024] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.030038] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.030047] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.030055] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.792 [2024-04-26 15:47:53.030069] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.792 [2024-04-26 15:47:53.034807] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.792 [2024-04-26 15:47:53.034917] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.034962] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.034979] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.792 [2024-04-26 15:47:53.034989] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.035005] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.035018] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.035026] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.035035] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.792 [2024-04-26 15:47:53.035048] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.792 [2024-04-26 15:47:53.039910] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.792 [2024-04-26 15:47:53.040017] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.040062] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.040078] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.792 [2024-04-26 15:47:53.040088] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.040103] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.040117] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.040125] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.040134] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.792 [2024-04-26 15:47:53.040148] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:29:22.792 [2024-04-26 15:47:53.044875] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.792 [2024-04-26 15:47:53.044971] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.045016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.045032] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.792 [2024-04-26 15:47:53.045043] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.045058] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.045072] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.045080] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.045089] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.792 [2024-04-26 15:47:53.045103] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.792 [2024-04-26 15:47:53.049974] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:29:22.792 [2024-04-26 15:47:53.050083] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.050128] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.050144] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe21de0 with addr=10.0.0.2, port=4420 00:29:22.792 [2024-04-26 15:47:53.050166] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe21de0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.050183] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe21de0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.050217] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.050227] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.050236] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:29:22.792 [2024-04-26 15:47:53.050250] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
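
The retry storm ends once the next discovery log page no longer lists the removed path: just below, the cnode0/10.0.0.2:4420 and cnode20/10.0.0.3:4420 paths are reported "not found" and detached, while the 4421 paths are "found again" and kept. The trsvcid checks that follow (mdns_discovery.sh@166/@167) reduce to the pipeline sketched here, a hypothetical standalone equivalent assuming the same /tmp/host.sock host RPC socket and an SPDK checkout with scripts/rpc.py:

  # Check that only the 4421 path remains on each discovered controller
  # (same jq/sort/xargs pipeline as the get_subsystem_paths helper in the trace).
  for ctrlr in mdns0_nvme0 mdns1_nvme0; do
      paths=$(./scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers -n "$ctrlr" \
              | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs)
      [[ $paths == "4421" ]] || echo "unexpected paths for $ctrlr: $paths"
  done
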
00:29:22.792 [2024-04-26 15:47:53.054943] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode20] resetting controller 00:29:22.792 [2024-04-26 15:47:53.055057] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.055103] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:22.792 [2024-04-26 15:47:53.055119] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xdd40e0 with addr=10.0.0.3, port=4420 00:29:22.792 [2024-04-26 15:47:53.055129] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdd40e0 is same with the state(5) to be set 00:29:22.792 [2024-04-26 15:47:53.055144] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdd40e0 (9): Bad file descriptor 00:29:22.792 [2024-04-26 15:47:53.055172] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode20] Ctrlr is in error state 00:29:22.792 [2024-04-26 15:47:53.055181] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode20] controller reinitialization failed 00:29:22.792 [2024-04-26 15:47:53.055190] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode20] in failed state. 00:29:22.792 [2024-04-26 15:47:53.055204] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:29:22.792 [2024-04-26 15:47:53.055248] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:29:22.792 [2024-04-26 15:47:53.055269] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:22.792 [2024-04-26 15:47:53.055294] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:22.792 [2024-04-26 15:47:53.056258] bdev_nvme.c:6706:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4420 not found 00:29:22.792 [2024-04-26 15:47:53.056287] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:22.792 [2024-04-26 15:47:53.056306] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:23.051 [2024-04-26 15:47:53.141448] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:23.051 [2024-04-26 15:47:53.142380] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@164 -- # get_subsystem_names 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@68 -- # sort 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:23.984 15:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.984 15:47:53 -- common/autotest_common.sh@10 -- # set +x 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@68 -- # xargs 00:29:23.984 15:47:53 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@164 -- # [[ mdns0_nvme0 mdns1_nvme0 == \m\d\n\s\0\_\n\v\m\e\0\ \m\d\n\s\1\_\n\v\m\e\0 ]] 
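
get_bdev_list, used at mdns_discovery.sh@165 immediately below, is the same kind of wrapper: it lists the namespaces attached through discovery as bdevs and normalizes the output for a single string compare. A sketch under the same assumptions (SPDK's scripts/rpc.py, host RPC socket at /tmp/host.sock):

  # Hypothetical equivalent of get_bdev_list: all four namespaces should still
  # be attached after the 4420 -> 4421 path switch.
  bdevs=$(./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs)
  echo "$bdevs"   # expected: mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2
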
00:29:23.984 15:47:53 -- host/mdns_discovery.sh@165 -- # get_bdev_list 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:23.984 15:47:53 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@64 -- # sort 00:29:23.984 15:47:53 -- host/mdns_discovery.sh@64 -- # xargs 00:29:23.984 15:47:53 -- common/autotest_common.sh@10 -- # set +x 00:29:23.984 15:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@165 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@166 -- # get_subsystem_paths mdns0_nvme0 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns0_nvme0 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:23.984 15:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:23.984 15:47:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # xargs 00:29:23.984 15:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@166 -- # [[ 4421 == \4\4\2\1 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@167 -- # get_subsystem_paths mdns1_nvme0 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n mdns1_nvme0 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:29:23.984 15:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.984 15:47:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # sort -n 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@72 -- # xargs 00:29:23.984 15:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@167 -- # [[ 4421 == \4\4\2\1 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@168 -- # get_notification_count 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:23.984 15:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.984 15:47:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:29:23.984 15:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@87 -- # notification_count=0 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@88 -- # notify_id=4 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@169 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@171 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:23.984 15:47:54 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:23.984 15:47:54 -- common/autotest_common.sh@10 -- # set +x 00:29:23.984 15:47:54 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:23.984 15:47:54 -- host/mdns_discovery.sh@172 -- # sleep 1 00:29:24.241 [2024-04-26 15:47:54.323358] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@174 -- # get_mdns_discovery_svcs 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:29:25.174 15:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.174 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@80 -- # sort 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@80 -- # xargs 00:29:25.174 15:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@174 -- # [[ '' == '' ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@175 -- # get_subsystem_names 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@68 -- # jq -r '.[].name' 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@68 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@68 -- # sort 00:29:25.174 15:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@68 -- # xargs 00:29:25.174 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.174 15:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@175 -- # [[ '' == '' ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@176 -- # get_bdev_list 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:25.174 15:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.174 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@64 -- # xargs 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@64 -- # sort 00:29:25.174 15:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@176 -- # [[ '' == '' ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@177 -- # get_notification_count 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@87 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 4 00:29:25.174 15:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@87 -- # jq '. 
| length' 00:29:25.174 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.174 15:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@87 -- # notification_count=4 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@88 -- # notify_id=8 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@178 -- # [[ 4 == 4 ]] 00:29:25.174 15:47:55 -- host/mdns_discovery.sh@181 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:25.174 15:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.174 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.432 15:47:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:25.432 15:47:55 -- host/mdns_discovery.sh@182 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:25.432 15:47:55 -- common/autotest_common.sh@638 -- # local es=0 00:29:25.432 15:47:55 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:25.432 15:47:55 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:25.432 15:47:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:25.432 15:47:55 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:25.432 15:47:55 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:25.432 15:47:55 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b mdns -s _nvme-disc._http -q nqn.2021-12.io.spdk:test 00:29:25.432 15:47:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:25.432 15:47:55 -- common/autotest_common.sh@10 -- # set +x 00:29:25.432 [2024-04-26 15:47:55.475144] bdev_mdns_client.c: 470:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running with name mdns 00:29:25.432 2024/04/26 15:47:55 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:mdns svcname:_nvme-disc._http], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:25.432 request: 00:29:25.432 { 00:29:25.432 "method": "bdev_nvme_start_mdns_discovery", 00:29:25.432 "params": { 00:29:25.432 "name": "mdns", 00:29:25.432 "svcname": "_nvme-disc._http", 00:29:25.432 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:25.432 } 00:29:25.432 } 00:29:25.432 Got JSON-RPC error response 00:29:25.432 GoRPCClient: error on JSON-RPC call 00:29:25.432 15:47:55 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:25.432 15:47:55 -- common/autotest_common.sh@641 -- # es=1 00:29:25.432 15:47:55 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:25.432 15:47:55 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:25.432 15:47:55 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:29:25.432 15:47:55 -- host/mdns_discovery.sh@183 -- # sleep 5 00:29:25.691 [2024-04-26 15:47:55.863912] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) CACHE_EXHAUSTED 00:29:25.691 [2024-04-26 15:47:55.963909] bdev_mdns_client.c: 395:mdns_browse_handler: *INFO*: (Browser) ALL_FOR_NOW 00:29:25.949 [2024-04-26 15:47:56.063921] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:25.949 [2024-04-26 15:47:56.063985] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: 
fedora38-cloud-1705279005-2131.local:8009 (10.0.0.3) 00:29:25.949 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:25.949 cookie is 0 00:29:25.949 is_local: 1 00:29:25.949 our_own: 0 00:29:25.949 wide_area: 0 00:29:25.949 multicast: 1 00:29:25.949 cached: 1 00:29:25.949 [2024-04-26 15:47:56.163944] bdev_mdns_client.c: 254:mdns_resolve_handler: *INFO*: Service 'CDC' of type '_nvme-disc._tcp' in domain 'local' 00:29:25.949 [2024-04-26 15:47:56.164025] bdev_mdns_client.c: 259:mdns_resolve_handler: *INFO*: fedora38-cloud-1705279005-2131.local:8009 (10.0.0.2) 00:29:25.949 TXT="p=tcp" "NQN=nqn.2014-08.org.nvmexpress.discovery" 00:29:25.949 cookie is 0 00:29:25.949 is_local: 1 00:29:25.949 our_own: 0 00:29:25.949 wide_area: 0 00:29:25.949 multicast: 1 00:29:25.949 cached: 1 00:29:26.882 [2024-04-26 15:47:57.071485] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr attached 00:29:26.882 [2024-04-26 15:47:57.071544] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.3:8009] discovery ctrlr connected 00:29:26.882 [2024-04-26 15:47:57.071567] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.3:8009] sent discovery log page command 00:29:26.882 [2024-04-26 15:47:57.157628] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 new subsystem mdns0_nvme0 00:29:26.882 [2024-04-26 15:47:57.171180] bdev_nvme.c:6919:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:29:26.882 [2024-04-26 15:47:57.171218] bdev_nvme.c:6999:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:29:26.882 [2024-04-26 15:47:57.171239] bdev_nvme.c:6882:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:29:27.140 [2024-04-26 15:47:57.221958] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.3:8009] attach mdns0_nvme0 done 00:29:27.140 [2024-04-26 15:47:57.222027] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.3:8009] NVM nqn.2016-06.io.spdk:cnode20:10.0.0.3:4421 found again 00:29:27.140 [2024-04-26 15:47:57.257299] bdev_nvme.c:6848:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem mdns1_nvme0 00:29:27.140 [2024-04-26 15:47:57.317205] bdev_nvme.c:6738:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach mdns1_nvme0 done 00:29:27.140 [2024-04-26 15:47:57.317270] bdev_nvme.c:6697:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@185 -- # get_mdns_discovery_svcs 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@80 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_mdns_discovery_info 00:29:30.424 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@80 -- # jq -r '.[].name' 00:29:30.424 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@80 -- # sort 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@80 -- # xargs 00:29:30.424 15:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@185 -- # [[ mdns == \m\d\n\s ]] 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@186 -- # get_discovery_ctrlrs 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:30.424 
15:48:00 -- host/mdns_discovery.sh@76 -- # sort 00:29:30.424 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:29:30.424 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@76 -- # xargs 00:29:30.424 15:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@186 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@187 -- # get_bdev_list 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.424 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@64 -- # sort 00:29:30.424 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@64 -- # xargs 00:29:30.424 15:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@187 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:30.424 15:48:00 -- host/mdns_discovery.sh@190 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:30.424 15:48:00 -- common/autotest_common.sh@638 -- # local es=0 00:29:30.424 15:48:00 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:30.424 15:48:00 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:29:30.424 15:48:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.424 15:48:00 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:29:30.424 15:48:00 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:29:30.424 15:48:00 -- common/autotest_common.sh@641 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_mdns_discovery -b cdc -s _nvme-disc._tcp -q nqn.2021-12.io.spdk:test 00:29:30.424 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.424 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.424 [2024-04-26 15:48:00.680472] bdev_mdns_client.c: 475:bdev_nvme_start_mdns_discovery: *ERROR*: mDNS discovery already running for service _nvme-disc._tcp 00:29:30.424 2024/04/26 15:48:00 error on JSON-RPC call, method: bdev_nvme_start_mdns_discovery, params: map[hostnqn:nqn.2021-12.io.spdk:test name:cdc svcname:_nvme-disc._tcp], err: error received for bdev_nvme_start_mdns_discovery method, err: Code=-17 Msg=File exists 00:29:30.424 request: 00:29:30.424 { 00:29:30.424 "method": "bdev_nvme_start_mdns_discovery", 00:29:30.424 "params": { 00:29:30.424 "name": "cdc", 00:29:30.424 "svcname": "_nvme-disc._tcp", 00:29:30.424 "hostnqn": "nqn.2021-12.io.spdk:test" 00:29:30.424 } 00:29:30.424 } 00:29:30.424 Got JSON-RPC error response 00:29:30.424 GoRPCClient: error on JSON-RPC call 00:29:30.424 15:48:00 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:29:30.424 15:48:00 -- common/autotest_common.sh@641 -- # es=1 00:29:30.424 15:48:00 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:29:30.425 15:48:00 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:29:30.425 15:48:00 -- common/autotest_common.sh@665 -- # (( 
!es == 0 )) 00:29:30.425 15:48:00 -- host/mdns_discovery.sh@191 -- # get_discovery_ctrlrs 00:29:30.425 15:48:00 -- host/mdns_discovery.sh@76 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:29:30.425 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.425 15:48:00 -- host/mdns_discovery.sh@76 -- # jq -r '.[].name' 00:29:30.425 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.425 15:48:00 -- host/mdns_discovery.sh@76 -- # sort 00:29:30.425 15:48:00 -- host/mdns_discovery.sh@76 -- # xargs 00:29:30.425 15:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@191 -- # [[ mdns0_nvme mdns1_nvme == \m\d\n\s\0\_\n\v\m\e\ \m\d\n\s\1\_\n\v\m\e ]] 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@192 -- # get_bdev_list 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@64 -- # jq -r '.[].name' 00:29:30.683 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@64 -- # sort 00:29:30.683 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@64 -- # xargs 00:29:30.683 15:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@192 -- # [[ mdns0_nvme0n1 mdns0_nvme0n2 mdns1_nvme0n1 mdns1_nvme0n2 == \m\d\n\s\0\_\n\v\m\e\0\n\1\ \m\d\n\s\0\_\n\v\m\e\0\n\2\ \m\d\n\s\1\_\n\v\m\e\0\n\1\ \m\d\n\s\1\_\n\v\m\e\0\n\2 ]] 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@193 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_mdns_discovery -b mdns 00:29:30.683 15:48:00 -- common/autotest_common.sh@549 -- # xtrace_disable 00:29:30.683 15:48:00 -- common/autotest_common.sh@10 -- # set +x 00:29:30.683 15:48:00 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@195 -- # trap - SIGINT SIGTERM EXIT 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@197 -- # kill 86473 00:29:30.683 15:48:00 -- host/mdns_discovery.sh@200 -- # wait 86473 00:29:30.683 [2024-04-26 15:48:00.966067] bdev_mdns_client.c: 424:bdev_nvme_avahi_iterate: *INFO*: Stopping avahi poller for service _nvme-disc._tcp 00:29:30.940 15:48:01 -- host/mdns_discovery.sh@201 -- # kill 86553 00:29:30.940 Got SIGTERM, quitting. 00:29:30.940 15:48:01 -- host/mdns_discovery.sh@202 -- # kill 86502 00:29:30.940 15:48:01 -- host/mdns_discovery.sh@203 -- # nvmftestfini 00:29:30.940 15:48:01 -- nvmf/common.sh@477 -- # nvmfcleanup 00:29:30.940 Got SIGTERM, quitting. 00:29:30.940 15:48:01 -- nvmf/common.sh@117 -- # sync 00:29:30.940 Leaving mDNS multicast group on interface nvmf_tgt_if2.IPv4 with address 10.0.0.3. 00:29:30.940 Leaving mDNS multicast group on interface nvmf_tgt_if.IPv4 with address 10.0.0.2. 00:29:30.940 avahi-daemon 0.8 exiting. 
00:29:30.940 15:48:01 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:30.940 15:48:01 -- nvmf/common.sh@120 -- # set +e 00:29:30.940 15:48:01 -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:30.940 15:48:01 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:30.940 rmmod nvme_tcp 00:29:30.940 rmmod nvme_fabrics 00:29:30.940 rmmod nvme_keyring 00:29:31.197 15:48:01 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:31.197 15:48:01 -- nvmf/common.sh@124 -- # set -e 00:29:31.197 15:48:01 -- nvmf/common.sh@125 -- # return 0 00:29:31.197 15:48:01 -- nvmf/common.sh@478 -- # '[' -n 86418 ']' 00:29:31.197 15:48:01 -- nvmf/common.sh@479 -- # killprocess 86418 00:29:31.197 15:48:01 -- common/autotest_common.sh@936 -- # '[' -z 86418 ']' 00:29:31.197 15:48:01 -- common/autotest_common.sh@940 -- # kill -0 86418 00:29:31.197 15:48:01 -- common/autotest_common.sh@941 -- # uname 00:29:31.197 15:48:01 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:29:31.197 15:48:01 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 86418 00:29:31.197 killing process with pid 86418 00:29:31.197 15:48:01 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:29:31.197 15:48:01 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:29:31.197 15:48:01 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 86418' 00:29:31.197 15:48:01 -- common/autotest_common.sh@955 -- # kill 86418 00:29:31.197 15:48:01 -- common/autotest_common.sh@960 -- # wait 86418 00:29:31.455 15:48:01 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:29:31.455 15:48:01 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:29:31.455 15:48:01 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:29:31.455 15:48:01 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:31.455 15:48:01 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:31.455 15:48:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.455 15:48:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.455 15:48:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.455 15:48:01 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:29:31.455 00:29:31.455 real 0m20.828s 00:29:31.455 user 0m40.612s 00:29:31.455 sys 0m2.151s 00:29:31.455 15:48:01 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:29:31.455 15:48:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.455 ************************************ 00:29:31.455 END TEST nvmf_mdns_discovery 00:29:31.455 ************************************ 00:29:31.455 15:48:01 -- nvmf/nvmf.sh@113 -- # [[ 1 -eq 1 ]] 00:29:31.455 15:48:01 -- nvmf/nvmf.sh@114 -- # run_test nvmf_multipath /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:31.455 15:48:01 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:29:31.455 15:48:01 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:29:31.455 15:48:01 -- common/autotest_common.sh@10 -- # set +x 00:29:31.455 ************************************ 00:29:31.455 START TEST nvmf_multipath 00:29:31.455 ************************************ 00:29:31.455 15:48:01 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/multipath.sh --transport=tcp 00:29:31.713 * Looking for test storage... 
00:29:31.713 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:29:31.713 15:48:01 -- host/multipath.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:29:31.713 15:48:01 -- nvmf/common.sh@7 -- # uname -s 00:29:31.713 15:48:01 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:31.713 15:48:01 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:31.713 15:48:01 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:31.713 15:48:01 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:31.713 15:48:01 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:31.713 15:48:01 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:31.713 15:48:01 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:31.713 15:48:01 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:31.713 15:48:01 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:31.713 15:48:01 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:31.713 15:48:01 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:29:31.713 15:48:01 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:29:31.713 15:48:01 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:31.713 15:48:01 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:31.713 15:48:01 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:29:31.713 15:48:01 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:31.713 15:48:01 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:29:31.713 15:48:01 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:31.713 15:48:01 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:31.714 15:48:01 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:31.714 15:48:01 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.714 15:48:01 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.714 15:48:01 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.714 15:48:01 -- paths/export.sh@5 -- # export PATH 00:29:31.714 15:48:01 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:31.714 15:48:01 -- nvmf/common.sh@47 -- # : 0 00:29:31.714 15:48:01 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:31.714 15:48:01 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:31.714 15:48:01 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:31.714 15:48:01 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:31.714 15:48:01 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:31.714 15:48:01 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:31.714 15:48:01 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:31.714 15:48:01 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:31.714 15:48:01 -- host/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:31.714 15:48:01 -- host/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:31.714 15:48:01 -- host/multipath.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:31.714 15:48:01 -- host/multipath.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:29:31.714 15:48:01 -- host/multipath.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:31.714 15:48:01 -- host/multipath.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:29:31.714 15:48:01 -- host/multipath.sh@30 -- # nvmftestinit 00:29:31.714 15:48:01 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:29:31.714 15:48:01 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:31.714 15:48:01 -- nvmf/common.sh@437 -- # prepare_net_devs 00:29:31.714 15:48:01 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:29:31.714 15:48:01 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:29:31.714 15:48:01 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:31.714 15:48:01 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:29:31.714 15:48:01 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:31.714 15:48:01 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:29:31.714 15:48:01 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:29:31.714 15:48:01 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:29:31.714 15:48:01 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:29:31.714 15:48:01 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:29:31.714 15:48:01 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:29:31.714 15:48:01 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.714 15:48:01 -- nvmf/common.sh@142 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.714 15:48:01 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:29:31.714 15:48:01 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:29:31.714 15:48:01 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:29:31.714 15:48:01 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:29:31.714 15:48:01 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:29:31.714 15:48:01 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.714 15:48:01 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:29:31.714 15:48:01 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:29:31.714 15:48:01 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:29:31.714 15:48:01 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:29:31.714 15:48:01 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:29:31.714 15:48:01 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:29:31.714 Cannot find device "nvmf_tgt_br" 00:29:31.714 15:48:01 -- nvmf/common.sh@155 -- # true 00:29:31.714 15:48:01 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:29:31.714 Cannot find device "nvmf_tgt_br2" 00:29:31.714 15:48:01 -- nvmf/common.sh@156 -- # true 00:29:31.714 15:48:01 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:29:31.714 15:48:01 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:29:31.714 Cannot find device "nvmf_tgt_br" 00:29:31.714 15:48:01 -- nvmf/common.sh@158 -- # true 00:29:31.714 15:48:01 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:29:31.714 Cannot find device "nvmf_tgt_br2" 00:29:31.714 15:48:01 -- nvmf/common.sh@159 -- # true 00:29:31.714 15:48:01 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:29:31.714 15:48:01 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:29:31.714 15:48:01 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:29:31.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:31.714 15:48:01 -- nvmf/common.sh@162 -- # true 00:29:31.714 15:48:01 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:29:31.714 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:29:31.714 15:48:01 -- nvmf/common.sh@163 -- # true 00:29:31.714 15:48:01 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:29:31.714 15:48:01 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:29:31.714 15:48:01 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:29:31.714 15:48:01 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:29:31.714 15:48:01 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:29:31.714 15:48:01 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:29:31.714 15:48:01 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:29:31.714 15:48:01 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:29:31.714 15:48:01 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:29:31.973 15:48:02 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:29:31.973 15:48:02 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:29:31.973 15:48:02 -- nvmf/common.sh@185 -- # ip 
link set nvmf_tgt_br up 00:29:31.973 15:48:02 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:29:31.973 15:48:02 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:29:31.973 15:48:02 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:29:31.973 15:48:02 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:29:31.973 15:48:02 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:29:31.973 15:48:02 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:29:31.973 15:48:02 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:29:31.973 15:48:02 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:29:31.973 15:48:02 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:29:31.973 15:48:02 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:29:31.973 15:48:02 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:29:31.973 15:48:02 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:29:31.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:31.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:29:31.973 00:29:31.973 --- 10.0.0.2 ping statistics --- 00:29:31.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.973 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:29:31.973 15:48:02 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:29:31.973 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:29:31.973 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.136 ms 00:29:31.973 00:29:31.973 --- 10.0.0.3 ping statistics --- 00:29:31.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.973 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:29:31.973 15:48:02 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:29:31.973 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:31.973 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.045 ms 00:29:31.973 00:29:31.973 --- 10.0.0.1 ping statistics --- 00:29:31.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.973 rtt min/avg/max/mdev = 0.045/0.045/0.045/0.000 ms 00:29:31.973 15:48:02 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.973 15:48:02 -- nvmf/common.sh@422 -- # return 0 00:29:31.973 15:48:02 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:29:31.973 15:48:02 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.973 15:48:02 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:29:31.973 15:48:02 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:29:31.973 15:48:02 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.973 15:48:02 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:29:31.973 15:48:02 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:29:31.973 15:48:02 -- host/multipath.sh@32 -- # nvmfappstart -m 0x3 00:29:31.973 15:48:02 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:29:31.973 15:48:02 -- common/autotest_common.sh@710 -- # xtrace_disable 00:29:31.973 15:48:02 -- common/autotest_common.sh@10 -- # set +x 00:29:31.973 15:48:02 -- nvmf/common.sh@470 -- # nvmfpid=87074 00:29:31.973 15:48:02 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:29:31.973 15:48:02 -- nvmf/common.sh@471 -- # waitforlisten 87074 00:29:31.973 15:48:02 -- common/autotest_common.sh@817 -- # '[' -z 87074 ']' 00:29:31.973 15:48:02 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.973 15:48:02 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:31.973 15:48:02 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.973 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.973 15:48:02 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:31.973 15:48:02 -- common/autotest_common.sh@10 -- # set +x 00:29:31.973 [2024-04-26 15:48:02.206369] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:29:31.973 [2024-04-26 15:48:02.206500] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.230 [2024-04-26 15:48:02.344077] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:32.488 [2024-04-26 15:48:02.528328] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:32.488 [2024-04-26 15:48:02.528760] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.488 [2024-04-26 15:48:02.529004] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.488 [2024-04-26 15:48:02.529249] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.488 [2024-04-26 15:48:02.529276] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:32.488 [2024-04-26 15:48:02.529430] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.488 [2024-04-26 15:48:02.529538] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.117 15:48:03 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:33.117 15:48:03 -- common/autotest_common.sh@850 -- # return 0 00:29:33.117 15:48:03 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:29:33.117 15:48:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:29:33.117 15:48:03 -- common/autotest_common.sh@10 -- # set +x 00:29:33.117 15:48:03 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:33.117 15:48:03 -- host/multipath.sh@33 -- # nvmfapp_pid=87074 00:29:33.117 15:48:03 -- host/multipath.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:33.375 [2024-04-26 15:48:03.484553] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:33.375 15:48:03 -- host/multipath.sh@36 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:29:33.632 Malloc0 00:29:33.632 15:48:03 -- host/multipath.sh@38 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:29:34.196 15:48:04 -- host/multipath.sh@39 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:34.197 15:48:04 -- host/multipath.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:34.454 [2024-04-26 15:48:04.632048] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:34.454 15:48:04 -- host/multipath.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:34.711 [2024-04-26 15:48:04.908190] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:34.711 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:34.711 15:48:04 -- host/multipath.sh@44 -- # bdevperf_pid=87182 00:29:34.711 15:48:04 -- host/multipath.sh@43 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:29:34.711 15:48:04 -- host/multipath.sh@46 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:34.711 15:48:04 -- host/multipath.sh@47 -- # waitforlisten 87182 /var/tmp/bdevperf.sock 00:29:34.711 15:48:04 -- common/autotest_common.sh@817 -- # '[' -z 87182 ']' 00:29:34.711 15:48:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:34.711 15:48:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:29:34.711 15:48:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:34.711 15:48:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:29:34.711 15:48:04 -- common/autotest_common.sh@10 -- # set +x 00:29:36.083 15:48:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:29:36.083 15:48:05 -- common/autotest_common.sh@850 -- # return 0 00:29:36.083 15:48:05 -- host/multipath.sh@51 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:29:36.083 15:48:06 -- host/multipath.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:29:36.341 Nvme0n1 00:29:36.599 15:48:06 -- host/multipath.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:29:36.856 Nvme0n1 00:29:36.856 15:48:07 -- host/multipath.sh@78 -- # sleep 1 00:29:36.856 15:48:07 -- host/multipath.sh@76 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:29:37.790 15:48:08 -- host/multipath.sh@81 -- # set_ANA_state non_optimized optimized 00:29:37.790 15:48:08 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:38.048 15:48:08 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:38.306 15:48:08 -- host/multipath.sh@83 -- # confirm_io_on_port optimized 4421 00:29:38.306 15:48:08 -- host/multipath.sh@65 -- # dtrace_pid=87269 00:29:38.306 15:48:08 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:38.306 15:48:08 -- host/multipath.sh@66 -- # sleep 6 00:29:44.866 15:48:14 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:44.866 15:48:14 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:44.866 15:48:14 -- host/multipath.sh@67 -- # active_port=4421 00:29:44.866 15:48:14 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:44.866 Attaching 4 probes... 
00:29:44.866 @path[10.0.0.2, 4421]: 17637 00:29:44.866 @path[10.0.0.2, 4421]: 17844 00:29:44.866 @path[10.0.0.2, 4421]: 17898 00:29:44.866 @path[10.0.0.2, 4421]: 17795 00:29:44.866 @path[10.0.0.2, 4421]: 17921 00:29:44.866 15:48:14 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:44.866 15:48:14 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:44.866 15:48:14 -- host/multipath.sh@69 -- # sed -n 1p 00:29:44.866 15:48:14 -- host/multipath.sh@69 -- # port=4421 00:29:44.866 15:48:14 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:44.866 15:48:14 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:44.866 15:48:14 -- host/multipath.sh@72 -- # kill 87269 00:29:44.866 15:48:14 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:44.866 15:48:14 -- host/multipath.sh@86 -- # set_ANA_state non_optimized inaccessible 00:29:44.866 15:48:14 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:29:44.866 15:48:15 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:45.431 15:48:15 -- host/multipath.sh@87 -- # confirm_io_on_port non_optimized 4420 00:29:45.431 15:48:15 -- host/multipath.sh@65 -- # dtrace_pid=87401 00:29:45.431 15:48:15 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:45.431 15:48:15 -- host/multipath.sh@66 -- # sleep 6 00:29:52.002 15:48:21 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:52.002 15:48:21 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:29:52.002 15:48:21 -- host/multipath.sh@67 -- # active_port=4420 00:29:52.002 15:48:21 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:52.002 Attaching 4 probes... 
00:29:52.002 @path[10.0.0.2, 4420]: 17070 00:29:52.002 @path[10.0.0.2, 4420]: 17273 00:29:52.002 @path[10.0.0.2, 4420]: 17291 00:29:52.002 @path[10.0.0.2, 4420]: 16883 00:29:52.002 @path[10.0.0.2, 4420]: 16995 00:29:52.002 15:48:21 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:52.002 15:48:21 -- host/multipath.sh@69 -- # sed -n 1p 00:29:52.002 15:48:21 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:52.002 15:48:21 -- host/multipath.sh@69 -- # port=4420 00:29:52.002 15:48:21 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:29:52.002 15:48:21 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:29:52.002 15:48:21 -- host/multipath.sh@72 -- # kill 87401 00:29:52.002 15:48:21 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:52.002 15:48:21 -- host/multipath.sh@89 -- # set_ANA_state inaccessible optimized 00:29:52.002 15:48:21 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:52.002 15:48:22 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:29:52.002 15:48:22 -- host/multipath.sh@90 -- # confirm_io_on_port optimized 4421 00:29:52.002 15:48:22 -- host/multipath.sh@65 -- # dtrace_pid=87538 00:29:52.002 15:48:22 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:52.002 15:48:22 -- host/multipath.sh@66 -- # sleep 6 00:29:58.559 15:48:28 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:29:58.559 15:48:28 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:29:58.559 15:48:28 -- host/multipath.sh@67 -- # active_port=4421 00:29:58.559 15:48:28 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:58.559 Attaching 4 probes... 
00:29:58.559 @path[10.0.0.2, 4421]: 13190 00:29:58.559 @path[10.0.0.2, 4421]: 17431 00:29:58.559 @path[10.0.0.2, 4421]: 17558 00:29:58.559 @path[10.0.0.2, 4421]: 17184 00:29:58.559 @path[10.0.0.2, 4421]: 17527 00:29:58.559 15:48:28 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:29:58.559 15:48:28 -- host/multipath.sh@69 -- # sed -n 1p 00:29:58.559 15:48:28 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:29:58.559 15:48:28 -- host/multipath.sh@69 -- # port=4421 00:29:58.559 15:48:28 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:29:58.559 15:48:28 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:29:58.559 15:48:28 -- host/multipath.sh@72 -- # kill 87538 00:29:58.559 15:48:28 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:29:58.560 15:48:28 -- host/multipath.sh@93 -- # set_ANA_state inaccessible inaccessible 00:29:58.560 15:48:28 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:29:58.818 15:48:28 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:29:59.075 15:48:29 -- host/multipath.sh@94 -- # confirm_io_on_port '' '' 00:29:59.075 15:48:29 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:29:59.075 15:48:29 -- host/multipath.sh@65 -- # dtrace_pid=87669 00:29:59.075 15:48:29 -- host/multipath.sh@66 -- # sleep 6 00:30:05.629 15:48:35 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:05.629 15:48:35 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="") | .address.trsvcid' 00:30:05.629 15:48:35 -- host/multipath.sh@67 -- # active_port= 00:30:05.629 15:48:35 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:05.629 Attaching 4 probes... 
00:30:05.629 00:30:05.629 00:30:05.629 00:30:05.629 00:30:05.629 00:30:05.629 15:48:35 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:05.629 15:48:35 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:05.629 15:48:35 -- host/multipath.sh@69 -- # sed -n 1p 00:30:05.629 15:48:35 -- host/multipath.sh@69 -- # port= 00:30:05.629 15:48:35 -- host/multipath.sh@70 -- # [[ '' == '' ]] 00:30:05.629 15:48:35 -- host/multipath.sh@71 -- # [[ '' == '' ]] 00:30:05.629 15:48:35 -- host/multipath.sh@72 -- # kill 87669 00:30:05.629 15:48:35 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:05.629 15:48:35 -- host/multipath.sh@96 -- # set_ANA_state non_optimized optimized 00:30:05.629 15:48:35 -- host/multipath.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:30:05.629 15:48:35 -- host/multipath.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:05.887 15:48:35 -- host/multipath.sh@97 -- # confirm_io_on_port optimized 4421 00:30:05.887 15:48:35 -- host/multipath.sh@65 -- # dtrace_pid=87800 00:30:05.887 15:48:35 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:05.887 15:48:35 -- host/multipath.sh@66 -- # sleep 6 00:30:12.441 15:48:41 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:12.441 15:48:41 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:12.441 15:48:42 -- host/multipath.sh@67 -- # active_port=4421 00:30:12.441 15:48:42 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:12.441 Attaching 4 probes... 
00:30:12.441 @path[10.0.0.2, 4421]: 16986 00:30:12.441 @path[10.0.0.2, 4421]: 17474 00:30:12.441 @path[10.0.0.2, 4421]: 17471 00:30:12.441 @path[10.0.0.2, 4421]: 17302 00:30:12.441 @path[10.0.0.2, 4421]: 17285 00:30:12.441 15:48:42 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:12.441 15:48:42 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:12.441 15:48:42 -- host/multipath.sh@69 -- # sed -n 1p 00:30:12.441 15:48:42 -- host/multipath.sh@69 -- # port=4421 00:30:12.441 15:48:42 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:12.441 15:48:42 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:12.441 15:48:42 -- host/multipath.sh@72 -- # kill 87800 00:30:12.441 15:48:42 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:12.441 15:48:42 -- host/multipath.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:12.441 [2024-04-26 15:48:42.459012] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459069] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459083] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459092] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459100] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459108] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459118] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459127] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459149] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459160] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459169] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459178] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459188] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459197] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 [2024-04-26 15:48:42.459205] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df1b10 is same with the state(5) to be set 00:30:12.441 15:48:42 -- host/multipath.sh@101 -- # sleep 1 00:30:13.374 15:48:43 -- 
host/multipath.sh@104 -- # confirm_io_on_port non_optimized 4420 00:30:13.374 15:48:43 -- host/multipath.sh@65 -- # dtrace_pid=87934 00:30:13.374 15:48:43 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:13.374 15:48:43 -- host/multipath.sh@66 -- # sleep 6 00:30:19.942 15:48:49 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="non_optimized") | .address.trsvcid' 00:30:19.942 15:48:49 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:19.942 15:48:49 -- host/multipath.sh@67 -- # active_port=4420 00:30:19.942 15:48:49 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:19.942 Attaching 4 probes... 00:30:19.942 @path[10.0.0.2, 4420]: 17156 00:30:19.942 @path[10.0.0.2, 4420]: 16960 00:30:19.942 @path[10.0.0.2, 4420]: 17164 00:30:19.942 @path[10.0.0.2, 4420]: 17366 00:30:19.942 @path[10.0.0.2, 4420]: 17168 00:30:19.942 15:48:49 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:19.942 15:48:49 -- host/multipath.sh@69 -- # sed -n 1p 00:30:19.942 15:48:49 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:19.942 15:48:49 -- host/multipath.sh@69 -- # port=4420 00:30:19.942 15:48:49 -- host/multipath.sh@70 -- # [[ 4420 == \4\4\2\0 ]] 00:30:19.942 15:48:49 -- host/multipath.sh@71 -- # [[ 4420 == \4\4\2\0 ]] 00:30:19.942 15:48:49 -- host/multipath.sh@72 -- # kill 87934 00:30:19.942 15:48:49 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:19.942 15:48:49 -- host/multipath.sh@107 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:19.942 [2024-04-26 15:48:50.004657] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:19.942 15:48:50 -- host/multipath.sh@108 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:30:20.201 15:48:50 -- host/multipath.sh@111 -- # sleep 6 00:30:26.755 15:48:56 -- host/multipath.sh@112 -- # confirm_io_on_port optimized 4421 00:30:26.755 15:48:56 -- host/multipath.sh@65 -- # dtrace_pid=88128 00:30:26.755 15:48:56 -- host/multipath.sh@64 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 87074 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_path.bt 00:30:26.755 15:48:56 -- host/multipath.sh@66 -- # sleep 6 00:30:32.013 15:49:02 -- host/multipath.sh@67 -- # jq -r '.[] | select (.ana_states[0].ana_state=="optimized") | .address.trsvcid' 00:30:32.271 15:49:02 -- host/multipath.sh@67 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 00:30:32.271 15:49:02 -- host/multipath.sh@67 -- # active_port=4421 00:30:32.271 15:49:02 -- host/multipath.sh@68 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:32.271 Attaching 4 probes... 
00:30:32.271 @path[10.0.0.2, 4421]: 16390 00:30:32.271 @path[10.0.0.2, 4421]: 17075 00:30:32.271 @path[10.0.0.2, 4421]: 16926 00:30:32.271 @path[10.0.0.2, 4421]: 16644 00:30:32.271 @path[10.0.0.2, 4421]: 16853 00:30:32.271 15:49:02 -- host/multipath.sh@69 -- # cut -d ']' -f1 00:30:32.271 15:49:02 -- host/multipath.sh@69 -- # awk '$1=="@path[10.0.0.2," {print $2}' 00:30:32.271 15:49:02 -- host/multipath.sh@69 -- # sed -n 1p 00:30:32.271 15:49:02 -- host/multipath.sh@69 -- # port=4421 00:30:32.271 15:49:02 -- host/multipath.sh@70 -- # [[ 4421 == \4\4\2\1 ]] 00:30:32.271 15:49:02 -- host/multipath.sh@71 -- # [[ 4421 == \4\4\2\1 ]] 00:30:32.271 15:49:02 -- host/multipath.sh@72 -- # kill 88128 00:30:32.271 15:49:02 -- host/multipath.sh@73 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:30:32.271 15:49:02 -- host/multipath.sh@114 -- # killprocess 87182 00:30:32.271 15:49:02 -- common/autotest_common.sh@936 -- # '[' -z 87182 ']' 00:30:32.271 15:49:02 -- common/autotest_common.sh@940 -- # kill -0 87182 00:30:32.271 15:49:02 -- common/autotest_common.sh@941 -- # uname 00:30:32.271 15:49:02 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:32.271 15:49:02 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87182 00:30:32.530 killing process with pid 87182 00:30:32.530 15:49:02 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:30:32.530 15:49:02 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:30:32.530 15:49:02 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87182' 00:30:32.530 15:49:02 -- common/autotest_common.sh@955 -- # kill 87182 00:30:32.530 15:49:02 -- common/autotest_common.sh@960 -- # wait 87182 00:30:32.530 Connection closed with partial response: 00:30:32.530 00:30:32.530 00:30:32.797 15:49:02 -- host/multipath.sh@116 -- # wait 87182 00:30:32.797 15:49:02 -- host/multipath.sh@118 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:32.797 [2024-04-26 15:48:04.971767] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:30:32.798 [2024-04-26 15:48:04.971879] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87182 ] 00:30:32.798 [2024-04-26 15:48:05.106288] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:32.798 [2024-04-26 15:48:05.227680] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:32.798 Running I/O for 90 seconds... 
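[editor's note] The trace above shows how host/multipath.sh verifies that I/O actually flowed over the expected path before tearing the test down: a bpftrace probe (nvmf_path.bt) counts I/O per "@path[addr, port]" while bdevperf runs, and the script then compares the port that carried traffic against the listener whose ANA state matches. The following is an illustrative, condensed sketch of that check; the rpc.py/jq/cut/awk/sed pipeline is copied from the log lines above, while the standalone-script framing, variable names, and the hard-coded expected port 4421 are assumptions added only for readability.

    #!/usr/bin/env bash
    # Sketch of the confirm_io_on_port logic seen in host/multipath.sh (illustrative, not the test itself)
    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt

    # 1. Ask the target which listener currently reports the expected ANA state
    #    (e.g. "optimized") and keep only its service id, i.e. the TCP port.
    active_port=$($rpc nvmf_subsystem_get_listeners nqn.2016-06.io.spdk:cnode1 \
      | jq -r '.[] | select(.ana_states[0].ana_state=="optimized") | .address.trsvcid')

    # 2. trace.txt holds lines like "@path[10.0.0.2, 4421]: 16390"; drop the
    #    count after ']', keep the port field, and take the first path seen.
    port=$(cut -d ']' -f1 "$trace" \
      | awk '$1=="@path[10.0.0.2," {print $2}' \
      | sed -n 1p)

    # 3. The step passes only if the port that carried I/O matches both the
    #    listener advertising the expected ANA state and the expected port.
    [[ $port == "$active_port" ]] && [[ $port == 4421 ]] \
      && echo "I/O confirmed on port $port"

After this check the script kills the bpftrace reader, removes trace.txt, and moves on (or, at the end of the run, kills the bdevperf process), which is exactly the kill/rm -f/killprocess sequence recorded above.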
00:30:32.798 [2024-04-26 15:48:15.396708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.396856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.396946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:71384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.396975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:71392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.397026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:71400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.397072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:71408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.397120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:71416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.397189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:71424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.397236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:71432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.397282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:71952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:71960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:103 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:71968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:71976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:71984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:71992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.397617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.397637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.398339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:72016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.798 [2024-04-26 15:48:15.398394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:71440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:71448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:71456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:71464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:71472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:71488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:71496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:71504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:71512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:71520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:71528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.398970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.398996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:71536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:32.798 [2024-04-26 15:48:15.399014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.798 [2024-04-26 15:48:15.399041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:71544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.798 [2024-04-26 15:48:15.399061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:71552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:71560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:71584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:71592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:71600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:71608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:114 nsid:1 lba:71616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:71624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:71632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:71640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:71648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:71656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:71664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:71672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:71680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.399949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.399978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:71688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400027] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:71696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:71704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:71720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:71728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:71760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:71768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0057 
p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:71776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:71784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.799 [2024-04-26 15:48:15.400631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.799 [2024-04-26 15:48:15.400658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:71792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:71800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:71808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:71816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:71824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:71840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.400956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.400983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:71848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:71856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:71864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401494] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.800 [2024-04-26 15:48:15.401587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:72024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:72040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:72056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:72072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:72080 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:30:32.800 [2024-04-26 15:48:15.401971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.401998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:72112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:72136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.402351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.402380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.403456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:72152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.403492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.403525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 
nsid:1 lba:72160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.800 [2024-04-26 15:48:15.403547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.800 [2024-04-26 15:48:15.403577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:72168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:72176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:72200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:72216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:72224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:72232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.403970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.403997] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:72248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:72264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:72272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:72280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:72288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 
00:30:32.801 [2024-04-26 15:48:15.404545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:15.404971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.801 [2024-04-26 15:48:15.404991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:89 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:130928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:130936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:130944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:130952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:130960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:130968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:130976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:130984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.801 [2024-04-26 15:48:21.994696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:130992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.801 [2024-04-26 15:48:21.994715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.994742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:131000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.994760] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.994786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:131008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.994804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.994830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:131016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.994848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.994874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:131024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.994892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.994917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:131032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.994936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.994961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:131040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.994980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:131048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:131056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:131064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:8 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:30:32.802 [2024-04-26 15:48:21.995247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:16 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:24 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:32 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:40 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:48 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:56 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:64 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:80 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:88 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.995967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.995986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.996012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.996031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.996057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.996077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.996103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.996122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.996172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.996195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.996222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:88 nsid:1 lba:168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.802 [2024-04-26 15:48:21.996241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.802 [2024-04-26 15:48:21.996266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996712] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.996972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.996998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.997017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.997063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.997107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.997171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.803 
[2024-04-26 15:48:21.997201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.997221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.803 [2024-04-26 15:48:21.997841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.997904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.997956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.997987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 
sqhd:0044 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:30:32.803 [2024-04-26 15:48:21.998800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.803 [2024-04-26 15:48:21.998819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:122 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.998858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.998879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.998909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.998928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.998958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.998978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999350] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:21.999966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:21.999988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000177] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 
15:48:22.000741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.000950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.000982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.001002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.001035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.001055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.001087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.001107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.804 [2024-04-26 15:48:22.001165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.804 [2024-04-26 15:48:22.001189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:32.805 [2024-04-26 15:48:22.001295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:22.001745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:22.001765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:33952 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:33960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:33968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:33976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:33984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:33992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:34000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:34008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:34016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:108 nsid:1 lba:34032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.110961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:34040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.110977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:34048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:34056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:34064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:34072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:34080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:34088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:34096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:34104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111364] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:34112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:34120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:34128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:34136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:34144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:34152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:34160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.805 [2024-04-26 15:48:29.111689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:30:32.805 [2024-04-26 15:48:29.111712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:34168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.111750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:34176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.111789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:34184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0012 p:0 m:0 
dnr:0 00:30:32.806 [2024-04-26 15:48:29.111827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:34192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.111864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:34200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.111903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:34208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.111941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:34216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.111979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:34224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.111994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:34232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:34240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:34248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:34256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:34264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:34272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:34280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:34288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:34296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:34304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:34312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:34320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:34328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:34336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:34344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:34352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:34360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:34368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:34376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:34384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.112975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:34392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.112992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:34400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.113122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:34408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.113189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:34416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.113231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:34424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:30:32.806 [2024-04-26 15:48:29.113273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:34432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.113314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:34440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.113367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:34448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.806 [2024-04-26 15:48:29.113412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:33512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.806 [2024-04-26 15:48:29.113454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:33520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.806 [2024-04-26 15:48:29.113496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:30:32.806 [2024-04-26 15:48:29.113522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:33528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.806 [2024-04-26 15:48:29.113538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:33536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:33544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:33552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 
nsid:1 lba:33560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:33568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:33576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:33584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:33592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:33600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:33608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.113984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:33616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.113999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:33624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:34456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.807 [2024-04-26 15:48:29.114081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114107] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:33640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:33648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:33656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:33664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:33672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:33680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:33688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:33696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 
00:30:32.807 [2024-04-26 15:48:29.114541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:33712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:33720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:33728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:33736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:33744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:33752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:33760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:30:32.807 [2024-04-26 15:48:29.114828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:33768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.807 [2024-04-26 15:48:29.114843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.114868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:33776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.114884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.114916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:33784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.114932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.114956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:33792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.114973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.114998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:33800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:33808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:33816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:33824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:33832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:33840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:33848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:33856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:33864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115357] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:33872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:33880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:33888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:33896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:33904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:33912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:33920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:33928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.115967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:33936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.115983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:33944 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:30:32.808 [2024-04-26 15:48:29.116040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:34472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:34480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:34488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:34496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:34512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:34520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:29.116477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:34528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:29.116494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:103 nsid:1 lba:81088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:81104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:81112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:81120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:81136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.808 [2024-04-26 15:48:42.459748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:81144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.808 [2024-04-26 15:48:42.459763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:81152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:81160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:81168 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:81176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:81184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:81192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:81200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.459974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:81208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.459987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:81216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:81224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:81232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:81240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:81248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 
15:48:42.460177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:81256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:81272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:81280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:81288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:81296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:81304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:81312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:81328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:81336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:81344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:81352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:81360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:81376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:81384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:81392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:81400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.809 [2024-04-26 15:48:42.460956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.809 [2024-04-26 15:48:42.460970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.460985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:80520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.460999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:80528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:80544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:80552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:80560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:80568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:80576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:80584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:80592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:80600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:80608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:80616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:80624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 
15:48:42.461430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:80632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:81464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:32.810 [2024-04-26 15:48:42.461472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:80640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:80648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:80656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:80664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:80672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:80680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:80688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:80696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:80704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:80712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:80720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:80728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:80736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:80744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:80752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:80760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:80768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.461982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:80776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.461996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.462010] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:39 nsid:1 lba:80784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.462024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.462039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:80792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.462058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.462073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:80800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.462087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.462102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:80808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.462116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.462131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:80816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.810 [2024-04-26 15:48:42.462167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.810 [2024-04-26 15:48:42.462186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:80824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.811 [2024-04-26 15:48:42.462200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.811 [2024-04-26 15:48:42.462215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:80832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.811 [2024-04-26 15:48:42.462229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.811 [2024-04-26 15:48:42.462244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:80840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.811 [2024-04-26 15:48:42.462272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.811 [2024-04-26 15:48:42.462288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:80848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.811 [2024-04-26 15:48:42.462313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.811 [2024-04-26 15:48:42.462333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:80856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.811 [2024-04-26 15:48:42.462351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.811 [2024-04-26 15:48:42.462367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:80864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:32.811 [2024-04-26 15:48:42.462388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[log condensed: between 15:48:42.462404 and 15:48:42.463522 the same NOTICE pair - nvme_io_qpair_print_command followed by ABORTED - SQ DELETION (00/08) qid:1 - repeats for every command still queued on sqid:1 while the qpair is torn down, covering READ lba 80872 through 81072 and WRITE lba 81472 through 81536; the duplicate lines are elided] 
00:30:32.812 [2024-04-26 15:48:42.463536] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d94980 is same with the state(5) to be set 00:30:32.812 [2024-04-26 15:48:42.463553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:32.812 [2024-04-26 15:48:42.463564] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:32.812 [2024-04-26 15:48:42.463574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81080 len:8 PRP1 0x0 PRP2 0x0 00:30:32.812 [2024-04-26 15:48:42.463588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:32.812 [2024-04-26 15:48:42.463646] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d94980 was disconnected and freed. reset controller. 
00:30:32.812 [2024-04-26 15:48:42.465133] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:32.812 [2024-04-26 15:48:42.465239] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8c590 (9): Bad file descriptor 00:30:32.812 [2024-04-26 15:48:42.465391] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.812 [2024-04-26 15:48:42.465449] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:32.812 [2024-04-26 15:48:42.465475] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d8c590 with addr=10.0.0.2, port=4421 00:30:32.812 [2024-04-26 15:48:42.465491] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d8c590 is same with the state(5) to be set 00:30:32.812 [2024-04-26 15:48:42.465515] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d8c590 (9): Bad file descriptor 00:30:32.812 [2024-04-26 15:48:42.465537] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:32.812 [2024-04-26 15:48:42.465565] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:32.812 [2024-04-26 15:48:42.465580] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:32.812 [2024-04-26 15:48:42.465605] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:32.812 [2024-04-26 15:48:42.465619] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:32.812 [2024-04-26 15:48:52.563462] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
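For orientation on the failover above (this is a sketch, not output captured from this run): errno 111 is ECONNREFUSED, so the host's reconnect attempts to 10.0.0.2:4421 keep failing until a listener answers on that port again, after which the reset at 15:48:52 completes. The exact sequence multipath.sh uses is not shown in this excerpt; a minimal way to drive a path flip like this from the target side, using only rpc.py verbs that appear elsewhere in this log (the variable names and the sleep are illustrative), would be:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sub=nqn.2016-06.io.spdk:cnode1
    # drop the active path; I/O queued on that qpair completes as ABORTED - SQ DELETION
    $rpc nvmf_subsystem_remove_listener $sub -t tcp -a 10.0.0.2 -s 4421
    sleep 10   # the host keeps retrying and logs connect() errno 111 while nothing listens
    # restore the path; the next reconnect poll succeeds and the controller reset completes
    $rpc nvmf_subsystem_add_listener $sub -t tcp -a 10.0.0.2 -s 4421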
00:30:32.812 Received shutdown signal, test time was about 55.458021 seconds 00:30:32.812 00:30:32.812 Latency(us) 00:30:32.812 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:32.812 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:32.812 Verification LBA range: start 0x0 length 0x4000 00:30:32.812 Nvme0n1 : 55.46 7408.56 28.94 0.00 0.00 17247.49 237.38 7046430.72 00:30:32.812 =================================================================================================================== 00:30:32.812 Total : 7408.56 28.94 0.00 0.00 17247.49 237.38 7046430.72 00:30:32.812 15:49:02 -- host/multipath.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:33.070 15:49:03 -- host/multipath.sh@122 -- # trap - SIGINT SIGTERM EXIT 00:30:33.070 15:49:03 -- host/multipath.sh@124 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/try.txt 00:30:33.070 15:49:03 -- host/multipath.sh@125 -- # nvmftestfini 00:30:33.070 15:49:03 -- nvmf/common.sh@477 -- # nvmfcleanup 00:30:33.070 15:49:03 -- nvmf/common.sh@117 -- # sync 00:30:33.070 15:49:03 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:33.070 15:49:03 -- nvmf/common.sh@120 -- # set +e 00:30:33.070 15:49:03 -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:33.070 15:49:03 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:33.070 rmmod nvme_tcp 00:30:33.070 rmmod nvme_fabrics 00:30:33.070 rmmod nvme_keyring 00:30:33.070 15:49:03 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:33.070 15:49:03 -- nvmf/common.sh@124 -- # set -e 00:30:33.070 15:49:03 -- nvmf/common.sh@125 -- # return 0 00:30:33.070 15:49:03 -- nvmf/common.sh@478 -- # '[' -n 87074 ']' 00:30:33.070 15:49:03 -- nvmf/common.sh@479 -- # killprocess 87074 00:30:33.070 15:49:03 -- common/autotest_common.sh@936 -- # '[' -z 87074 ']' 00:30:33.070 15:49:03 -- common/autotest_common.sh@940 -- # kill -0 87074 00:30:33.070 15:49:03 -- common/autotest_common.sh@941 -- # uname 00:30:33.070 15:49:03 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:33.070 15:49:03 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 87074 00:30:33.328 15:49:03 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:30:33.328 15:49:03 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:30:33.328 killing process with pid 87074 00:30:33.328 15:49:03 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 87074' 00:30:33.328 15:49:03 -- common/autotest_common.sh@955 -- # kill 87074 00:30:33.328 15:49:03 -- common/autotest_common.sh@960 -- # wait 87074 00:30:33.586 15:49:03 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:30:33.586 15:49:03 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:30:33.586 15:49:03 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:30:33.586 15:49:03 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:33.586 15:49:03 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:33.586 15:49:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.586 15:49:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.586 15:49:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.586 15:49:03 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:30:33.586 00:30:33.586 real 1m2.013s 00:30:33.586 user 2m54.423s 00:30:33.586 sys 0m14.621s 00:30:33.586 15:49:03 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:30:33.586 
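As a reading aid for the summary table above (a note for orientation, not part of the captured output): the run lasted about 55.46 s, Nvme0n1 sustained 7408.56 IOPS with zero Fail/s and TO/s, and the Average/min/max columns are completion latencies in microseconds per the Latency(us) header, i.e. roughly 17.2 ms average, 237 us minimum, and a ~7 s maximum consistent with I/O held while a path was down. The MiB/s column follows directly from the 4096-byte I/O size:

    awk 'BEGIN { printf "%.2f MiB/s\n", 7408.56 * 4096 / 1048576 }'   # prints 28.94, matching the table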
************************************ 00:30:33.586 END TEST nvmf_multipath 00:30:33.586 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:30:33.586 ************************************ 00:30:33.586 15:49:03 -- nvmf/nvmf.sh@115 -- # run_test nvmf_timeout /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:33.586 15:49:03 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:30:33.586 15:49:03 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:30:33.586 15:49:03 -- common/autotest_common.sh@10 -- # set +x 00:30:33.586 ************************************ 00:30:33.586 START TEST nvmf_timeout 00:30:33.586 ************************************ 00:30:33.586 15:49:03 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/host/timeout.sh --transport=tcp 00:30:33.844 * Looking for test storage... 00:30:33.844 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/host 00:30:33.844 15:49:03 -- host/timeout.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:30:33.844 15:49:03 -- nvmf/common.sh@7 -- # uname -s 00:30:33.844 15:49:03 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.844 15:49:03 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.844 15:49:03 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.844 15:49:03 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.844 15:49:03 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:33.844 15:49:03 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:33.844 15:49:03 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.844 15:49:03 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:33.844 15:49:03 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.844 15:49:03 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:33.844 15:49:03 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:30:33.844 15:49:03 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:30:33.844 15:49:03 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.844 15:49:03 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:33.844 15:49:03 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:30:33.844 15:49:03 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.844 15:49:03 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:30:33.844 15:49:03 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.844 15:49:03 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.844 15:49:03 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.844 15:49:03 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.844 15:49:03 -- paths/export.sh@3 -- # 
PATH=[same toolchain PATH as printed at paths/export.sh@2 above, with /opt/go/1.21.1/bin promoted to the front; long duplicate value elided] 00:30:33.844 15:49:03 -- paths/export.sh@4 -- # PATH=[same value with /opt/protoc/21.7/bin promoted to the front; elided] 00:30:33.844 15:49:03 -- paths/export.sh@5 -- # export PATH 00:30:33.844 15:49:03 -- paths/export.sh@6 -- # echo [the exported PATH value again; elided] 00:30:33.844 15:49:03 -- nvmf/common.sh@47 -- # : 0 00:30:33.844 15:49:03 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:33.844 15:49:03 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:33.844 15:49:03 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:33.844 15:49:03 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.844 15:49:03 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.844 15:49:03 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:33.844 15:49:03 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:33.844 15:49:03 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:33.844 15:49:03 -- host/timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:33.844 15:49:03 -- host/timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:33.844 15:49:03 -- host/timeout.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:33.844 15:49:03 -- host/timeout.sh@15 -- # bpf_sh=/home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 00:30:33.844 15:49:03 -- host/timeout.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:33.844 15:49:03 -- host/timeout.sh@19 -- # nvmftestinit 00:30:33.845 15:49:03 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:30:33.845 15:49:03 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.845 15:49:03 -- nvmf/common.sh@437 -- # prepare_net_devs 00:30:33.845 15:49:03 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:30:33.845 15:49:03 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:30:33.845 15:49:03 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:33.845 15:49:03 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:33.845 15:49:03 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:33.845 15:49:03 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 
00:30:33.845 15:49:03 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:30:33.845 15:49:03 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:30:33.845 15:49:03 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:30:33.845 15:49:03 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:30:33.845 15:49:03 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:30:33.845 15:49:03 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:33.845 15:49:03 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:33.845 15:49:03 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:30:33.845 15:49:03 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:30:33.845 15:49:03 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:30:33.845 15:49:03 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:30:33.845 15:49:03 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:30:33.845 15:49:03 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:33.845 15:49:03 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:30:33.845 15:49:03 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:30:33.845 15:49:03 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:30:33.845 15:49:03 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:30:33.845 15:49:03 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:30:33.845 15:49:03 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:30:33.845 Cannot find device "nvmf_tgt_br" 00:30:33.845 15:49:03 -- nvmf/common.sh@155 -- # true 00:30:33.845 15:49:03 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:30:33.845 Cannot find device "nvmf_tgt_br2" 00:30:33.845 15:49:03 -- nvmf/common.sh@156 -- # true 00:30:33.845 15:49:03 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:30:33.845 15:49:03 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:30:33.845 Cannot find device "nvmf_tgt_br" 00:30:33.845 15:49:03 -- nvmf/common.sh@158 -- # true 00:30:33.845 15:49:03 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:30:33.845 Cannot find device "nvmf_tgt_br2" 00:30:33.845 15:49:03 -- nvmf/common.sh@159 -- # true 00:30:33.845 15:49:03 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:30:33.845 15:49:04 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:30:33.845 15:49:04 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:30:33.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:33.845 15:49:04 -- nvmf/common.sh@162 -- # true 00:30:33.845 15:49:04 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:30:33.845 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:30:33.845 15:49:04 -- nvmf/common.sh@163 -- # true 00:30:33.845 15:49:04 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:30:33.845 15:49:04 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:30:33.845 15:49:04 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:30:33.845 15:49:04 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:30:33.845 15:49:04 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:30:33.845 15:49:04 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:30:33.845 15:49:04 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 
dev nvmf_init_if 00:30:33.845 15:49:04 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:30:33.845 15:49:04 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:30:33.845 15:49:04 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:30:33.845 15:49:04 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:30:33.845 15:49:04 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:30:33.845 15:49:04 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:30:33.845 15:49:04 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:30:34.102 15:49:04 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:30:34.102 15:49:04 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:30:34.102 15:49:04 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:30:34.102 15:49:04 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:30:34.102 15:49:04 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:30:34.102 15:49:04 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:30:34.102 15:49:04 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:30:34.102 15:49:04 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:30:34.102 15:49:04 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:30:34.102 15:49:04 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:30:34.102 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:34.102 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.061 ms 00:30:34.102 00:30:34.102 --- 10.0.0.2 ping statistics --- 00:30:34.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.102 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:30:34.102 15:49:04 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:30:34.102 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:30:34.102 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.032 ms 00:30:34.102 00:30:34.102 --- 10.0.0.3 ping statistics --- 00:30:34.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.102 rtt min/avg/max/mdev = 0.032/0.032/0.032/0.000 ms 00:30:34.102 15:49:04 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:30:34.102 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:34.102 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.026 ms 00:30:34.102 00:30:34.102 --- 10.0.0.1 ping statistics --- 00:30:34.102 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:34.102 rtt min/avg/max/mdev = 0.026/0.026/0.026/0.000 ms 00:30:34.102 15:49:04 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:34.102 15:49:04 -- nvmf/common.sh@422 -- # return 0 00:30:34.102 15:49:04 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:30:34.102 15:49:04 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:34.102 15:49:04 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:30:34.102 15:49:04 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:30:34.102 15:49:04 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:34.102 15:49:04 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:30:34.102 15:49:04 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:30:34.102 15:49:04 -- host/timeout.sh@21 -- # nvmfappstart -m 0x3 00:30:34.102 15:49:04 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:30:34.102 15:49:04 -- common/autotest_common.sh@710 -- # xtrace_disable 00:30:34.102 15:49:04 -- common/autotest_common.sh@10 -- # set +x 00:30:34.102 15:49:04 -- nvmf/common.sh@470 -- # nvmfpid=88463 00:30:34.102 15:49:04 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:30:34.102 15:49:04 -- nvmf/common.sh@471 -- # waitforlisten 88463 00:30:34.102 15:49:04 -- common/autotest_common.sh@817 -- # '[' -z 88463 ']' 00:30:34.102 15:49:04 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:34.102 15:49:04 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:34.102 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:34.102 15:49:04 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:34.102 15:49:04 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:34.102 15:49:04 -- common/autotest_common.sh@10 -- # set +x 00:30:34.102 [2024-04-26 15:49:04.319171] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:30:34.102 [2024-04-26 15:49:04.319267] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:34.362 [2024-04-26 15:49:04.456953] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:30:34.362 [2024-04-26 15:49:04.569580] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:34.362 [2024-04-26 15:49:04.569672] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:34.362 [2024-04-26 15:49:04.569684] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:34.362 [2024-04-26 15:49:04.569693] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:34.362 [2024-04-26 15:49:04.569700] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
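The nvmf_veth_init block above builds the virtual network the rest of this test runs on: the target lives in the nvmf_tgt_ns_spdk namespace behind veth pairs tied together by the nvmf_br bridge, with the initiator at 10.0.0.1 and the target at 10.0.0.2 (and 10.0.0.3 on a second interface). A condensed sketch of that bring-up, using only commands that appear in the log above; the second target interface, loopback, and the iptables FORWARD rule are left out here for brevity:

    ip netns add nvmf_tgt_ns_spdk                              # target-side network namespace
    ip link add nvmf_init_if type veth peer name nvmf_init_br  # initiator veth pair
    ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br    # target veth pair
    ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk             # move the target end into the namespace
    ip addr add 10.0.0.1/24 dev nvmf_init_if                   # initiator address
    ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
    ip link set nvmf_init_if up; ip link set nvmf_init_br up; ip link set nvmf_tgt_br up
    ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
    ip link add nvmf_br type bridge && ip link set nvmf_br up  # bridge joins the host-side veth ends
    ip link set nvmf_init_br master nvmf_br
    ip link set nvmf_tgt_br master nvmf_br
    iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                         # connectivity check, as in the log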
00:30:34.362 [2024-04-26 15:49:04.569857] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:30:34.362 [2024-04-26 15:49:04.569866] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:30:35.297 15:49:05 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:35.297 15:49:05 -- common/autotest_common.sh@850 -- # return 0 00:30:35.297 15:49:05 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:30:35.297 15:49:05 -- common/autotest_common.sh@716 -- # xtrace_disable 00:30:35.297 15:49:05 -- common/autotest_common.sh@10 -- # set +x 00:30:35.297 15:49:05 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:35.297 15:49:05 -- host/timeout.sh@23 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid || :; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:35.297 15:49:05 -- host/timeout.sh@25 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:35.556 [2024-04-26 15:49:05.676457] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:35.556 15:49:05 -- host/timeout.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:35.814 Malloc0 00:30:35.814 15:49:05 -- host/timeout.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:36.072 15:49:06 -- host/timeout.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:36.329 15:49:06 -- host/timeout.sh@29 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:36.588 [2024-04-26 15:49:06.719642] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:36.588 15:49:06 -- host/timeout.sh@32 -- # bdevperf_pid=88554 00:30:36.588 15:49:06 -- host/timeout.sh@31 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:36.588 15:49:06 -- host/timeout.sh@34 -- # waitforlisten 88554 /var/tmp/bdevperf.sock 00:30:36.588 15:49:06 -- common/autotest_common.sh@817 -- # '[' -z 88554 ']' 00:30:36.588 15:49:06 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:36.588 15:49:06 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:36.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:36.588 15:49:06 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:36.588 15:49:06 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:36.588 15:49:06 -- common/autotest_common.sh@10 -- # set +x 00:30:36.588 [2024-04-26 15:49:06.791566] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
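The timeout.sh steps above provision the target entirely over JSON-RPC; the "Listening on 10.0.0.2 port 4420" notice that follows confirms the final step. Collected in one place, and using only the calls visible in the log (comments describe what the log shows, not additional options), the sequence is essentially:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t tcp -o -u 8192                     # TCP transport, with the options the test passes
    $rpc bdev_malloc_create 64 512 -b Malloc0                        # 64 MiB RAM-backed bdev, 512-byte blocks
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001   # allow any host, fixed serial
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0    # expose the bdev as namespace 1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevperf then attaches to this subsystem on the next line with --ctrlr-loss-timeout-sec 5 and --reconnect-delay-sec 2, which are the knobs this timeout test exercises when the listener is later removed mid-I/O.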
00:30:36.588 [2024-04-26 15:49:06.791652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88554 ] 00:30:36.845 [2024-04-26 15:49:06.923691] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.845 [2024-04-26 15:49:07.051484] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:37.780 15:49:07 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:37.780 15:49:07 -- common/autotest_common.sh@850 -- # return 0 00:30:37.780 15:49:07 -- host/timeout.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:38.038 15:49:08 -- host/timeout.sh@46 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:30:38.296 NVMe0n1 00:30:38.296 15:49:08 -- host/timeout.sh@50 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:38.296 15:49:08 -- host/timeout.sh@51 -- # rpc_pid=88602 00:30:38.296 15:49:08 -- host/timeout.sh@53 -- # sleep 1 00:30:38.296 Running I/O for 10 seconds... 00:30:39.230 15:49:09 -- host/timeout.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:39.489 [2024-04-26 15:49:09.777400] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777460] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777472] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777481] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777490] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777500] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777508] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777517] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777526] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777534] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777541] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 [2024-04-26 15:49:09.777550] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set 00:30:39.489 
[log condensed: tcp.c:1594:nvmf_tcp_qpair_set_recv_state keeps printing *ERROR*: The recv state of tqpair=0x219cd00 is same with the state(5) to be set, dozens of times between 15:49:09.777558 and 15:49:09.778114; the duplicate lines are elided] 
00:30:39.490 [2024-04-26 15:49:09.778921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:89696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:39.490 [2024-04-26 15:49:09.778964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[log condensed: between 15:49:09.778988 and 15:49:09.780330 the same NOTICE pair repeats for every command still queued on sqid:1 after the 4420 listener was removed - READ lba 89704 through 90072 and WRITE lba 90088 through 90176 - each completed with ABORTED - SQ DELETION (00/08) qid:1; the duplicate lines are elided] 
00:30:39.492 [2024-04-26 15:49:09.780352] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:90184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:90192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:90200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:90208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:90216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:90224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:90232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:90240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:90248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:90256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:85 nsid:1 lba:90264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:90272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:90280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:90288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:90296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:90304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:90312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:90320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:90328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:90336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.492 [2024-04-26 15:49:09.780770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.492 [2024-04-26 15:49:09.780781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:90344 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:30:39.752 [2024-04-26 15:49:09.780796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.752 [2024-04-26 15:49:09.780807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:90352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:90360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:90368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:90376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:90384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:90392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:90400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:90408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.780988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:90416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.780997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:90424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 
[2024-04-26 15:49:09.781018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:90432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:90440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:90448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:90456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:90464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:90472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:90480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:90488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:90496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:90504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781273] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:90512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:90520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:90528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:90536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:90544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:90552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:90560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:90568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:90576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:90584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:90592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:90600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:90608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:90616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:90624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:90632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:90640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:90648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.753 [2024-04-26 15:49:09.781663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.753 [2024-04-26 15:49:09.781675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:90656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:90664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:90672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:90680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:90688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:90696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:90704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:90712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:39.754 [2024-04-26 15:49:09.781833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781859] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:39.754 [2024-04-26 15:49:09.781869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:39.754 [2024-04-26 15:49:09.781877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:90080 len:8 PRP1 0x0 PRP2 0x0 00:30:39.754 [2024-04-26 15:49:09.781892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:39.754 [2024-04-26 15:49:09.781947] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x190ac10 was disconnected and freed. reset controller. 
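For anyone scanning the block of abort messages above: the "(00/08)" printed in every completion is the NVMe status pair (Status Code Type 00h, generic command status; Status Code 08h, Command Aborted due to SQ Deletion), which is exactly what bdev_nvme is expected to report while it tears down the I/O qpair for a controller reset. A minimal shell sketch for summarizing such a section offline; "console.log" is a hypothetical name for a saved copy of this output.

# Count completions aborted because the submission queue was deleted, and how
# many READ/WRITE commands were printed while the queued I/O was flushed.
grep -o 'ABORTED - SQ DELETION (00/08)' console.log | wc -l
awk '/nvme_io_qpair_print_command/ {
       for (i = 1; i <= NF; i++)
         if ($i == "READ" || $i == "WRITE") count[$i]++
     }
     END { for (op in count) printf "%-5s commands printed: %d\n", op, count[op] }' console.log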
00:30:39.754 [2024-04-26 15:49:09.782192] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:39.754 [2024-04-26 15:49:09.782277] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189bdc0 (9): Bad file descriptor 00:30:39.754 [2024-04-26 15:49:09.782383] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.754 [2024-04-26 15:49:09.782441] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:39.754 [2024-04-26 15:49:09.782465] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189bdc0 with addr=10.0.0.2, port=4420 00:30:39.754 [2024-04-26 15:49:09.782477] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189bdc0 is same with the state(5) to be set 00:30:39.754 [2024-04-26 15:49:09.782496] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189bdc0 (9): Bad file descriptor 00:30:39.754 [2024-04-26 15:49:09.782513] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:39.754 [2024-04-26 15:49:09.782523] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:39.754 [2024-04-26 15:49:09.782533] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:39.754 [2024-04-26 15:49:09.782554] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:39.754 [2024-04-26 15:49:09.782566] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:39.754 15:49:09 -- host/timeout.sh@56 -- # sleep 2 00:30:41.653 [2024-04-26 15:49:11.782774] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.653 [2024-04-26 15:49:11.782884] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:41.653 [2024-04-26 15:49:11.782905] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189bdc0 with addr=10.0.0.2, port=4420 00:30:41.653 [2024-04-26 15:49:11.782921] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189bdc0 is same with the state(5) to be set 00:30:41.653 [2024-04-26 15:49:11.782952] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189bdc0 (9): Bad file descriptor 00:30:41.653 [2024-04-26 15:49:11.782973] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:41.653 [2024-04-26 15:49:11.782984] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:41.653 [2024-04-26 15:49:11.782995] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:41.653 [2024-04-26 15:49:11.783024] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
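The repeated "connect() failed, errno = 111" lines above are ECONNREFUSED: the target's TCP listener has been taken down, so every reconnect attempt from the host is refused, and bdev_nvme keeps retrying per the reconnect options the test configured until the controller-loss timeout expires. A quick way to confirm what errno 111 means on the build host; the header path is the usual Linux location and is an assumption here.

# ECONNREFUSED is errno 111 on Linux; the kernel header spells it out.
grep 'ECONNREFUSED' /usr/include/asm-generic/errno.h
# expected to show ECONNREFUSED defined as 111 ("Connection refused")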
00:30:41.653 [2024-04-26 15:49:11.783038] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:41.653 15:49:11 -- host/timeout.sh@57 -- # get_controller 00:30:41.653 15:49:11 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:41.653 15:49:11 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:41.910 15:49:12 -- host/timeout.sh@57 -- # [[ NVMe0 == \N\V\M\e\0 ]] 00:30:41.910 15:49:12 -- host/timeout.sh@58 -- # get_bdev 00:30:41.910 15:49:12 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:41.910 15:49:12 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:42.168 15:49:12 -- host/timeout.sh@58 -- # [[ NVMe0n1 == \N\V\M\e\0\n\1 ]] 00:30:42.168 15:49:12 -- host/timeout.sh@61 -- # sleep 5 00:30:43.542 [2024-04-26 15:49:13.783208] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.542 [2024-04-26 15:49:13.783313] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:43.542 [2024-04-26 15:49:13.783334] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x189bdc0 with addr=10.0.0.2, port=4420 00:30:43.542 [2024-04-26 15:49:13.783350] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x189bdc0 is same with the state(5) to be set 00:30:43.542 [2024-04-26 15:49:13.783379] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189bdc0 (9): Bad file descriptor 00:30:43.542 [2024-04-26 15:49:13.783399] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:43.542 [2024-04-26 15:49:13.783410] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:43.542 [2024-04-26 15:49:13.783422] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:43.542 [2024-04-26 15:49:13.783451] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:43.542 [2024-04-26 15:49:13.783464] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:46.069 [2024-04-26 15:49:15.783678] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
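The host/timeout.sh@41 and @37 trace lines above come from small helpers that poll the bdevperf RPC socket to check whether the NVMe controller and its bdev are still registered while the target stays unreachable. A minimal sketch of equivalent helpers follows; the names and RPC calls mirror the trace, but the bodies are an illustration rather than the verbatim script.

#!/usr/bin/env bash
# Poll bdevperf over its RPC socket, as the traced get_controller/get_bdev steps do.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

get_controller() {
    # Prints "NVMe0" while the controller is still attached; prints nothing
    # once bdev_nvme gives up after the controller-loss timeout.
    "$rpc" -s "$sock" bdev_nvme_get_controllers | jq -r '.[].name'
}

get_bdev() {
    "$rpc" -s "$sock" bdev_get_bdevs | jq -r '.[].name'
}

[[ $(get_controller) == "NVMe0" ]] && echo "controller still registered"
[[ $(get_bdev) == "NVMe0n1" ]] && echo "bdev still present"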
00:30:46.635 00:30:46.635 Latency(us) 00:30:46.635 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:46.635 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:30:46.635 Verification LBA range: start 0x0 length 0x4000 00:30:46.635 NVMe0n1 : 8.23 1361.96 5.32 15.55 0.00 92776.75 2234.18 7015926.69 00:30:46.635 =================================================================================================================== 00:30:46.635 Total : 1361.96 5.32 15.55 0.00 92776.75 2234.18 7015926.69 00:30:46.635 0 00:30:47.201 15:49:17 -- host/timeout.sh@62 -- # get_controller 00:30:47.201 15:49:17 -- host/timeout.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:30:47.201 15:49:17 -- host/timeout.sh@41 -- # jq -r '.[].name' 00:30:47.458 15:49:17 -- host/timeout.sh@62 -- # [[ '' == '' ]] 00:30:47.458 15:49:17 -- host/timeout.sh@63 -- # get_bdev 00:30:47.458 15:49:17 -- host/timeout.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs 00:30:47.458 15:49:17 -- host/timeout.sh@37 -- # jq -r '.[].name' 00:30:48.025 15:49:18 -- host/timeout.sh@63 -- # [[ '' == '' ]] 00:30:48.025 15:49:18 -- host/timeout.sh@65 -- # wait 88602 00:30:48.025 15:49:18 -- host/timeout.sh@67 -- # killprocess 88554 00:30:48.025 15:49:18 -- common/autotest_common.sh@936 -- # '[' -z 88554 ']' 00:30:48.025 15:49:18 -- common/autotest_common.sh@940 -- # kill -0 88554 00:30:48.025 15:49:18 -- common/autotest_common.sh@941 -- # uname 00:30:48.025 15:49:18 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:30:48.025 15:49:18 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88554 00:30:48.025 killing process with pid 88554 00:30:48.025 Received shutdown signal, test time was about 9.503495 seconds 00:30:48.025 00:30:48.025 Latency(us) 00:30:48.025 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:48.025 =================================================================================================================== 00:30:48.025 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:48.025 15:49:18 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:30:48.025 15:49:18 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:30:48.025 15:49:18 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88554' 00:30:48.025 15:49:18 -- common/autotest_common.sh@955 -- # kill 88554 00:30:48.025 15:49:18 -- common/autotest_common.sh@960 -- # wait 88554 00:30:48.025 15:49:18 -- host/timeout.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:48.283 [2024-04-26 15:49:18.511849] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:48.283 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
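One sanity check worth knowing when reading the bdevperf summary above: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size, so 1361.96 IOPS works out to about 5.32 MiB/s, matching the printed value. A one-liner to reproduce the arithmetic:

# Cross-check the bdevperf summary line: IOPS * IO size, expressed in MiB/s.
awk 'BEGIN { printf "%.2f MiB/s\n", 1361.96 * 4096 / (1024 * 1024) }'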
00:30:48.283 15:49:18 -- host/timeout.sh@74 -- # bdevperf_pid=88764 00:30:48.283 15:49:18 -- host/timeout.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -f 00:30:48.283 15:49:18 -- host/timeout.sh@76 -- # waitforlisten 88764 /var/tmp/bdevperf.sock 00:30:48.283 15:49:18 -- common/autotest_common.sh@817 -- # '[' -z 88764 ']' 00:30:48.283 15:49:18 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:48.283 15:49:18 -- common/autotest_common.sh@822 -- # local max_retries=100 00:30:48.283 15:49:18 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:48.283 15:49:18 -- common/autotest_common.sh@826 -- # xtrace_disable 00:30:48.283 15:49:18 -- common/autotest_common.sh@10 -- # set +x 00:30:48.542 [2024-04-26 15:49:18.576460] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:30:48.542 [2024-04-26 15:49:18.576568] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88764 ] 00:30:48.542 [2024-04-26 15:49:18.709055] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:48.542 [2024-04-26 15:49:18.825087] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:30:49.477 15:49:19 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:30:49.477 15:49:19 -- common/autotest_common.sh@850 -- # return 0 00:30:49.477 15:49:19 -- host/timeout.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:30:49.735 15:49:19 -- host/timeout.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1 00:30:49.994 NVMe0n1 00:30:49.994 15:49:20 -- host/timeout.sh@84 -- # rpc_pid=88807 00:30:49.994 15:49:20 -- host/timeout.sh@83 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:49.994 15:49:20 -- host/timeout.sh@86 -- # sleep 1 00:30:49.994 Running I/O for 10 seconds... 
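Condensed, the second run that the trace above sets up looks like the following. Every command is copied from the trace (same sockets, same flags); only the grouping into one script is new, and the closing remove_listener is the step that appears next in the log to force the timeout path.

#!/usr/bin/env bash
# Re-attach the controller with the reconnect knobs under test, start the
# bdevperf job, then yank the TCP listener away so the host has to retry.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/bdevperf.sock

"$rpc" -s "$sock" bdev_nvme_set_options -r -1
"$rpc" -s "$sock" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
    --ctrlr-loss-timeout-sec 5 --fast-io-fail-timeout-sec 2 --reconnect-delay-sec 1

# Queue the I/O workload inside bdevperf and give it a moment to start.
/home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s "$sock" perform_tests &
sleep 1

# Target side: drop the listener out from under the established connection
# (goes to the target's default RPC socket, as in the trace).
"$rpc" nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420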
00:30:50.927 15:49:21 -- host/timeout.sh@87 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:51.188 [2024-04-26 15:49:21.374957] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2398770 is same with the state(5) to be set
[... the same nvmf_tcp_qpair_set_recv_state *ERROR* line for tqpair=0x2398770 repeats many more times while the listener is torn down ...]
00:30:51.190 [2024-04-26 15:49:21.377842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190
[2024-04-26 15:49:21.377885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.377909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.377921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.377935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.377945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.377956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.377966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.377977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.377987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.377998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378110] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:84848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:51.190 [2024-04-26 15:49:21.378351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:85088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:85096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:85104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:85112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.190 [2024-04-26 15:49:21.378451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.190 [2024-04-26 15:49:21.378462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:85120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:85128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:85136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:85144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:85152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:85160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:85168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:85176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:85184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:85208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:85216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:85224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 
15:49:21.378764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:85232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:85240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:85248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:85256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:85264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:85272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:85280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:85288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:85296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:85304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378975] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:85312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.378984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.378995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:85320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:85328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:85336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:85344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:85352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:85360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:85368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:85376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:11 nsid:1 lba:85392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.191 [2024-04-26 15:49:21.379203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.191 [2024-04-26 15:49:21.379214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:85400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:85408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:85416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:85424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:85432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:85440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:85448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:85456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:85464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:85472 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:85480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:85488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:85496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:85504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:85512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:85520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:85528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:85536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:85544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:85552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 
15:49:21.379623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:85568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:85576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:85584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:85592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:30:51.192 [2024-04-26 15:49:21.379725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379754] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.192 [2024-04-26 15:49:21.379765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85600 len:8 PRP1 0x0 PRP2 0x0 00:30:51.192 [2024-04-26 15:49:21.379775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379788] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.192 [2024-04-26 15:49:21.379801] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.192 [2024-04-26 15:49:21.379810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85608 len:8 PRP1 0x0 PRP2 0x0 00:30:51.192 [2024-04-26 15:49:21.379819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379828] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.192 [2024-04-26 15:49:21.379836] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.192 [2024-04-26 15:49:21.379844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85616 len:8 PRP1 0x0 PRP2 0x0 00:30:51.192 [2024-04-26 15:49:21.379853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 
15:49:21.379863] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.192 [2024-04-26 15:49:21.379870] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.192 [2024-04-26 15:49:21.379877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85624 len:8 PRP1 0x0 PRP2 0x0 00:30:51.192 [2024-04-26 15:49:21.379886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379895] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.192 [2024-04-26 15:49:21.379903] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.192 [2024-04-26 15:49:21.379911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85632 len:8 PRP1 0x0 PRP2 0x0 00:30:51.192 [2024-04-26 15:49:21.379920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379934] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.192 [2024-04-26 15:49:21.379942] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.192 [2024-04-26 15:49:21.379950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85640 len:8 PRP1 0x0 PRP2 0x0 00:30:51.192 [2024-04-26 15:49:21.379959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.192 [2024-04-26 15:49:21.379967] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.379975] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.379982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85648 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.379991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380007] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85656 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85664 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380066] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85672 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380104] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380112] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85680 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380149] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380159] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85688 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380186] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380193] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85696 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380224] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380232] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85704 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380258] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85712 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380291] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting 
queued i/o 00:30:51.193 [2024-04-26 15:49:21.380298] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85720 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380324] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380331] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85728 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380384] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85736 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380410] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380417] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85744 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85752 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380478] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85760 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380527] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380535] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.193 [2024-04-26 15:49:21.380543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85768 len:8 PRP1 0x0 PRP2 0x0 00:30:51.193 [2024-04-26 15:49:21.380551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.193 [2024-04-26 15:49:21.380560] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.193 [2024-04-26 15:49:21.380568] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.380575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85776 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.380589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.380599] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.380606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.380614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85784 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.380622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.380632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.380639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.380647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85792 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.380656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.380665] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.380677] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.380685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85800 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.380694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.380704] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.380711] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84856 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389777] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84864 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389813] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84872 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389848] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84880 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389881] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389888] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84888 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84896 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389945] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389953] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.389960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84904 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.389969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.389978] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.389985] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 
15:49:21.389993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84912 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84920 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390043] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390050] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84928 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390075] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390082] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84936 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390109] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84944 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390156] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390164] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84952 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390190] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390197] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84960 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390223] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:30:51.194 [2024-04-26 15:49:21.390230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:30:51.194 [2024-04-26 15:49:21.390237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:84968 len:8 PRP1 0x0 PRP2 0x0 00:30:51.194 [2024-04-26 15:49:21.390246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.194 [2024-04-26 15:49:21.390320] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x94aaf0 was disconnected and freed. reset controller. 00:30:51.195 [2024-04-26 15:49:21.390425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.195 [2024-04-26 15:49:21.390442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.195 [2024-04-26 15:49:21.390455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.195 [2024-04-26 15:49:21.390464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.195 [2024-04-26 15:49:21.390474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.195 [2024-04-26 15:49:21.390484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.195 [2024-04-26 15:49:21.390493] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:51.195 [2024-04-26 15:49:21.390502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:51.195 [2024-04-26 15:49:21.390512] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:30:51.195 [2024-04-26 15:49:21.390749] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.195 [2024-04-26 15:49:21.390785] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:30:51.195 [2024-04-26 15:49:21.390892] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.195 [2024-04-26 15:49:21.390942] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:51.195 [2024-04-26 15:49:21.390960] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbdc0 with addr=10.0.0.2, port=4420 00:30:51.195 [2024-04-26 15:49:21.390971] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:30:51.195 [2024-04-26 15:49:21.390989] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:30:51.195 [2024-04-26 15:49:21.391006] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:51.195 [2024-04-26 15:49:21.391016] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:51.195 [2024-04-26 15:49:21.391026] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:51.195 [2024-04-26 15:49:21.391046] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:51.195 [2024-04-26 15:49:21.391057] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:51.195 15:49:21 -- host/timeout.sh@90 -- # sleep 1 00:30:52.124 [2024-04-26 15:49:22.391199] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.124 [2024-04-26 15:49:22.391293] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:52.124 [2024-04-26 15:49:22.391314] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbdc0 with addr=10.0.0.2, port=4420 00:30:52.124 [2024-04-26 15:49:22.391329] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:30:52.124 [2024-04-26 15:49:22.391356] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:30:52.124 [2024-04-26 15:49:22.391376] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:30:52.124 [2024-04-26 15:49:22.391387] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:30:52.124 [2024-04-26 15:49:22.391399] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:30:52.124 [2024-04-26 15:49:22.391427] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:30:52.124 [2024-04-26 15:49:22.391439] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:30:52.124 15:49:22 -- host/timeout.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.380 [2024-04-26 15:49:22.665824] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.635 15:49:22 -- host/timeout.sh@92 -- # wait 88807 00:30:53.199 [2024-04-26 15:49:23.405019] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:01.359 00:31:01.359 Latency(us) 00:31:01.359 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:01.359 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:31:01.359 Verification LBA range: start 0x0 length 0x4000 00:31:01.359 NVMe0n1 : 10.01 6337.81 24.76 0.00 0.00 20159.87 2129.92 3035150.89 00:31:01.359 =================================================================================================================== 00:31:01.359 Total : 6337.81 24.76 0.00 0.00 20159.87 2129.92 3035150.89 00:31:01.359 0 00:31:01.359 15:49:30 -- host/timeout.sh@97 -- # rpc_pid=88925 00:31:01.359 15:49:30 -- host/timeout.sh@98 -- # sleep 1 00:31:01.359 15:49:30 -- host/timeout.sh@96 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.359 Running I/O for 10 seconds... 00:31:01.359 15:49:31 -- host/timeout.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:01.359 [2024-04-26 15:49:31.497217] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497275] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497288] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497297] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497305] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497314] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497322] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497330] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497338] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497346] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497354] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497362] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497371] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497379] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497387] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 
15:49:31.497395] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.359 [2024-04-26 15:49:31.497403] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497411] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497419] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497426] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497434] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497442] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497458] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497466] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497474] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497482] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497490] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497499] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497507] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.497515] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f0e50 is same with the state(5) to be set 00:31:01.360 [2024-04-26 15:49:31.498082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:81432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:81440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:81448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:81456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:81464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:81472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:81480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:81488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:81496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:81504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:81512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:81520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:81528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:81536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:81544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:81552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:81560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:81568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:81576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:81584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:81592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:81600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:81608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.360 [2024-04-26 15:49:31.498662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:81992 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:82008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:82024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:82032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:82040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:82056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.360 [2024-04-26 15:49:31.498870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:82064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.360 [2024-04-26 15:49:31.498880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.498892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:82072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 
[2024-04-26 15:49:31.498914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.498926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.498936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.498948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:82088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.498957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.498975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.498985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.498996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.499006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:82112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.499027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:82120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.499047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:82128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.361 [2024-04-26 15:49:31.499067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:81616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:81624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:81632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:81640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:81648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:81656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:81664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:81672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:81680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:81688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:81696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:81704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:81712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:81720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:81728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:81736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:81744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:81752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:81760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:81768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:81776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:81784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:81792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:81800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:81808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:81816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:81824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:81832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:81840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:81848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:81856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.361 [2024-04-26 15:49:31.499765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.361 [2024-04-26 15:49:31.499776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:81864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:81872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 
[2024-04-26 15:49:31.499833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:81880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:81888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:81896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:81904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:81912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:81920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.362 [2024-04-26 15:49:31.499956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.499977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.499988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:82144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:82160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500057] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:82168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:82184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:82192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:82200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:82216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:82232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:82240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:63 nsid:1 lba:82248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:82256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:82272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:82280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:82288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:82296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:82304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:82328 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:82336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:82352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:82360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:82368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:82376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.362 [2024-04-26 15:49:31.500670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.362 [2024-04-26 15:49:31.500681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.363 [2024-04-26 15:49:31.500691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82392 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.500752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500773] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500781] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82400 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 
15:49:31.500799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500808] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500816] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82408 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.500834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500844] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500852] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82416 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.500869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500878] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500885] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82424 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.500902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500911] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82432 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.500937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500954] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82440 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.500972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.500981] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.500988] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.500996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82448 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501016] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501023] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81928 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501064] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81936 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501091] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501099] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81944 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501127] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501145] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81952 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81960 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501209] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81968 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501243] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501250] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81976 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501278] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.363 [2024-04-26 15:49:31.501285] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.363 [2024-04-26 15:49:31.501293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:81984 len:8 PRP1 0x0 PRP2 0x0 00:31:01.363 [2024-04-26 15:49:31.501303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501371] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x95a740 was disconnected and freed. reset controller. 00:31:01.363 [2024-04-26 15:49:31.501483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.363 [2024-04-26 15:49:31.501507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501520] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.363 [2024-04-26 15:49:31.501536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.363 [2024-04-26 15:49:31.501556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501567] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.363 [2024-04-26 15:49:31.501577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.363 [2024-04-26 15:49:31.501586] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:31:01.363 [2024-04-26 15:49:31.501802] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:01.363 [2024-04-26 15:49:31.501837] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:31:01.363 [2024-04-26 15:49:31.501958] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:01.363 [2024-04-26 15:49:31.502016] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:01.363 [2024-04-26 15:49:31.502033] 
nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbdc0 with addr=10.0.0.2, port=4420 00:31:01.363 [2024-04-26 15:49:31.502045] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:31:01.363 [2024-04-26 15:49:31.502063] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:31:01.363 [2024-04-26 15:49:31.502085] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:01.363 [2024-04-26 15:49:31.502095] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:01.363 [2024-04-26 15:49:31.502107] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:01.363 [2024-04-26 15:49:31.502127] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:01.363 [2024-04-26 15:49:31.502153] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:01.363 15:49:31 -- host/timeout.sh@101 -- # sleep 3 00:31:02.299 [2024-04-26 15:49:32.502354] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.299 [2024-04-26 15:49:32.502482] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:02.299 [2024-04-26 15:49:32.502501] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbdc0 with addr=10.0.0.2, port=4420 00:31:02.299 [2024-04-26 15:49:32.502519] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:31:02.299 [2024-04-26 15:49:32.502550] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:31:02.299 [2024-04-26 15:49:32.502572] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:02.299 [2024-04-26 15:49:32.502584] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:02.299 [2024-04-26 15:49:32.502596] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:02.299 [2024-04-26 15:49:32.502632] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:31:02.299 [2024-04-26 15:49:32.502645] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:03.234 [2024-04-26 15:49:33.502879] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.234 [2024-04-26 15:49:33.503021] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:03.234 [2024-04-26 15:49:33.503042] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbdc0 with addr=10.0.0.2, port=4420 00:31:03.234 [2024-04-26 15:49:33.503060] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:31:03.234 [2024-04-26 15:49:33.503094] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:31:03.234 [2024-04-26 15:49:33.503123] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:03.234 [2024-04-26 15:49:33.503135] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:03.234 [2024-04-26 15:49:33.503165] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:03.234 [2024-04-26 15:49:33.503200] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:03.234 [2024-04-26 15:49:33.503215] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.609 [2024-04-26 15:49:34.506775] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.609 [2024-04-26 15:49:34.506898] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:04.609 [2024-04-26 15:49:34.506918] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8dbdc0 with addr=10.0.0.2, port=4420 00:31:04.609 [2024-04-26 15:49:34.506935] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8dbdc0 is same with the state(5) to be set 00:31:04.609 [2024-04-26 15:49:34.507218] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8dbdc0 (9): Bad file descriptor 00:31:04.609 [2024-04-26 15:49:34.507528] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:04.609 [2024-04-26 15:49:34.507566] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:04.609 [2024-04-26 15:49:34.507587] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:04.609 [2024-04-26 15:49:34.511499] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:04.609 [2024-04-26 15:49:34.511536] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:04.609 15:49:34 -- host/timeout.sh@102 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:04.609 [2024-04-26 15:49:34.787159] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:04.609 15:49:34 -- host/timeout.sh@103 -- # wait 88925 00:31:05.542 [2024-04-26 15:49:35.545198] bdev_nvme.c:2053:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:31:10.804 
00:31:10.804 Latency(us)
00:31:10.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:10.804 Job: NVMe0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:31:10.804 Verification LBA range: start 0x0 length 0x4000
00:31:10.804 NVMe0n1 : 10.01 5420.99 21.18 3655.23 0.00 14076.38 666.53 3019898.88
00:31:10.804 ===================================================================================================================
00:31:10.804 Total : 5420.99 21.18 3655.23 0.00 14076.38 0.00 3019898.88
00:31:10.804 0
00:31:10.804 15:49:40 -- host/timeout.sh@105 -- # killprocess 88764
00:31:10.804 15:49:40 -- common/autotest_common.sh@936 -- # '[' -z 88764 ']'
00:31:10.804 15:49:40 -- common/autotest_common.sh@940 -- # kill -0 88764
00:31:10.804 15:49:40 -- common/autotest_common.sh@941 -- # uname
00:31:10.804 15:49:40 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']'
00:31:10.804 15:49:40 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88764
00:31:10.804 killing process with pid 88764
Received shutdown signal, test time was about 10.000000 seconds
00:31:10.804 
00:31:10.804 Latency(us)
00:31:10.804 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:10.804 ===================================================================================================================
00:31:10.804 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:10.805 15:49:40 -- common/autotest_common.sh@942 -- # process_name=reactor_2
00:31:10.805 15:49:40 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']'
00:31:10.805 15:49:40 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88764'
00:31:10.805 15:49:40 -- common/autotest_common.sh@955 -- # kill 88764
00:31:10.805 15:49:40 -- common/autotest_common.sh@960 -- # wait 88764
00:31:10.805 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:31:10.805 15:49:40 -- host/timeout.sh@110 -- # bdevperf_pid=89057
00:31:10.805 15:49:40 -- host/timeout.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w randread -t 10 -f
00:31:10.805 15:49:40 -- host/timeout.sh@112 -- # waitforlisten 89057 /var/tmp/bdevperf.sock
00:31:10.805 15:49:40 -- common/autotest_common.sh@817 -- # '[' -z 89057 ']'
00:31:10.805 15:49:40 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:10.805 15:49:40 -- common/autotest_common.sh@822 -- # local max_retries=100
00:31:10.805 15:49:40 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:31:10.805 15:49:40 -- common/autotest_common.sh@826 -- # xtrace_disable
00:31:10.805 15:49:40 -- common/autotest_common.sh@10 -- # set +x
00:31:10.805 [2024-04-26 15:49:40.783402] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization...
00:31:10.805 [2024-04-26 15:49:40.783501] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89057 ] 00:31:10.805 [2024-04-26 15:49:40.918287] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:10.805 [2024-04-26 15:49:41.057662] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:11.738 15:49:41 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:11.738 15:49:41 -- common/autotest_common.sh@850 -- # return 0 00:31:11.738 15:49:41 -- host/timeout.sh@116 -- # dtrace_pid=89084 00:31:11.738 15:49:41 -- host/timeout.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/bpftrace.sh 89057 /home/vagrant/spdk_repo/spdk/scripts/bpf/nvmf_timeout.bt 00:31:11.738 15:49:41 -- host/timeout.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 -e 9 00:31:11.997 15:49:42 -- host/timeout.sh@120 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 --ctrlr-loss-timeout-sec 5 --reconnect-delay-sec 2 00:31:12.255 NVMe0n1 00:31:12.255 15:49:42 -- host/timeout.sh@124 -- # rpc_pid=89133 00:31:12.255 15:49:42 -- host/timeout.sh@123 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:12.255 15:49:42 -- host/timeout.sh@125 -- # sleep 1 00:31:12.255 Running I/O for 10 seconds... 00:31:13.190 15:49:43 -- host/timeout.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:13.460 [2024-04-26 15:49:43.666074] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666132] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666159] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666168] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666177] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666187] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666196] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666205] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666213] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666222] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26 15:49:43.666231] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The 
recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.460 [2024-04-26
15:49:43.666965] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.666975] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.666983] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.666992] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667000] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667008] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667017] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667025] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667033] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667042] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667050] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667058] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667066] tcp.c:1594:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21f4640 is same with the state(5) to be set 00:31:13.461 [2024-04-26 15:49:43.667374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:46408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:126384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:64544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:21720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667541] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:51112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:57888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:107792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:31472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:87640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667764] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:103008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:95440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:124456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:76888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:13160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:53472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.461 [2024-04-26 15:49:43.667908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.461 [2024-04-26 15:49:43.667921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:108128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.667934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.667947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:44368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.667957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.667969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.667979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.667991] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:106024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:46544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:109888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:39744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:47648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:14664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:109240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 
lba:28408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:97984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:41368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:24704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:100432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:16752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:8816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:76800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:94000 len:8 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:32280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:0 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:31584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:126976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:65896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:3032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:37776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:36184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:20536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 
15:49:43.668720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:109848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:97256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:57048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:60432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.462 [2024-04-26 15:49:43.668840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:24184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.462 [2024-04-26 15:49:43.668851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.668863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:91128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.668873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.668893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.668904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.668916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:118808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.668927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.668940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:58088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.668950] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.668962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:106952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.668972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.668984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:42752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.668995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:9264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:77456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:14720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:93952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:72336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:68768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:66184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:65840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:70096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:63944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:52328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:36992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:14192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:1456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:68176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:31064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:100960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:121536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:72464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:107608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:58456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:109736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:42960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:13.463 [2024-04-26 15:49:43.669653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:58704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:50632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:42824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:76872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.463 [2024-04-26 15:49:43.669751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.463 [2024-04-26 15:49:43.669763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:97576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:27840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:48672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669879] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:42400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:26248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:62704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.669978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.669990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:26584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:28640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:94152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:69728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670103] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:44880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:95568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:50488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:45232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:30040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:127368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:102968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:111224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:97816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670351] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:20 nsid:1 lba:89424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:22256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:13.464 [2024-04-26 15:49:43.670384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670394] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x199bc10 is same with the state(5) to be set 00:31:13.464 [2024-04-26 15:49:43.670407] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:13.464 [2024-04-26 15:49:43.670415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:13.464 [2024-04-26 15:49:43.670425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82200 len:8 PRP1 0x0 PRP2 0x0 00:31:13.464 [2024-04-26 15:49:43.670435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:13.464 [2024-04-26 15:49:43.670503] bdev_nvme.c:1600:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x199bc10 was disconnected and freed. reset controller. 00:31:13.464 [2024-04-26 15:49:43.670799] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.464 [2024-04-26 15:49:43.670876] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192cdc0 (9): Bad file descriptor 00:31:13.464 [2024-04-26 15:49:43.671001] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.464 [2024-04-26 15:49:43.671051] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:13.464 [2024-04-26 15:49:43.671068] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192cdc0 with addr=10.0.0.2, port=4420 00:31:13.464 [2024-04-26 15:49:43.671079] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192cdc0 is same with the state(5) to be set 00:31:13.464 [2024-04-26 15:49:43.671097] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192cdc0 (9): Bad file descriptor 00:31:13.464 [2024-04-26 15:49:43.671127] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:13.464 [2024-04-26 15:49:43.671155] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:13.465 [2024-04-26 15:49:43.671175] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:13.465 [2024-04-26 15:49:43.671196] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
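Every READ still queued on submission queue 1 when qpair 0x199bc10 dropped is completed manually with ABORTED - SQ DELETION (00/08) before the qpair is freed and a controller reset is scheduled. When triaging a run like this it can help to reduce the dump to per-queue counts; a minimal sketch, assuming the console output above has been saved locally as autorun.log (an assumed filename, not a file the test itself writes):

    # Summarise how many completions were aborted per submission queue.
    grep -o 'ABORTED - SQ DELETION (00/08) qid:[0-9]*' autorun.log | sort | uniq -c
    # Count the READ commands that were outstanding on sqid 1 when it was deleted.
    grep -c 'nvme_io_qpair_print_command: \*NOTICE\*: READ sqid:1' autorun.log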
00:31:13.465 [2024-04-26 15:49:43.671208] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:13.465 15:49:43 -- host/timeout.sh@128 -- # wait 89133 00:31:15.996 [2024-04-26 15:49:45.671521] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.996 [2024-04-26 15:49:45.671643] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:15.996 [2024-04-26 15:49:45.671663] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192cdc0 with addr=10.0.0.2, port=4420 00:31:15.996 [2024-04-26 15:49:45.671681] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192cdc0 is same with the state(5) to be set 00:31:15.996 [2024-04-26 15:49:45.671714] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192cdc0 (9): Bad file descriptor 00:31:15.996 [2024-04-26 15:49:45.671752] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:15.996 [2024-04-26 15:49:45.671766] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:15.996 [2024-04-26 15:49:45.671779] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:15.996 [2024-04-26 15:49:45.671812] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:15.996 [2024-04-26 15:49:45.671826] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:17.893 [2024-04-26 15:49:47.672089] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.893 [2024-04-26 15:49:47.672225] posix.c:1037:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:17.893 [2024-04-26 15:49:47.672246] nvme_tcp.c:2371:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x192cdc0 with addr=10.0.0.2, port=4420 00:31:17.893 [2024-04-26 15:49:47.672263] nvme_tcp.c: 322:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192cdc0 is same with the state(5) to be set 00:31:17.893 [2024-04-26 15:49:47.672296] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192cdc0 (9): Bad file descriptor 00:31:17.893 [2024-04-26 15:49:47.672319] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:31:17.893 [2024-04-26 15:49:47.672330] nvme_ctrlr.c:1749:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:31:17.893 [2024-04-26 15:49:47.672355] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:17.893 [2024-04-26 15:49:47.672391] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:17.893 [2024-04-26 15:49:47.672415] nvme_ctrlr.c:1651:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:19.791 [2024-04-26 15:49:49.672616] bdev_nvme.c:2051:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
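Each retry fails its TCP connect() with errno 111 (connection refused) at roughly two-second intervals (15:49:43, :45, :47, :49), which is the reconnect cadence the timeout test wants to observe. The pass criterion applied a few lines below is that the bdev trace records more than two 'reconnect delay' events; a simplified restatement of that check, reusing the trace path shown below (this is not the literal host/timeout.sh source):

    trace=/home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt
    reconnects=$(grep -c 'reconnect delay bdev controller NVMe0' "$trace")
    # Three delayed reconnects are captured in this run; two or fewer would fail the test.
    (( reconnects > 2 )) || exit 1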
00:31:20.724 00:31:20.724 Latency(us) 00:31:20.724 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.724 Job: NVMe0n1 (Core Mask 0x4, workload: randread, depth: 128, IO size: 4096) 00:31:20.724 NVMe0n1 : 8.17 2614.15 10.21 15.67 0.00 48639.41 2398.02 7046430.72 00:31:20.724 =================================================================================================================== 00:31:20.724 Total : 2614.15 10.21 15.67 0.00 48639.41 2398.02 7046430.72 00:31:20.724 0 00:31:20.724 15:49:50 -- host/timeout.sh@129 -- # cat /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:20.724 Attaching 5 probes... 00:31:20.724 1355.085582: reset bdev controller NVMe0 00:31:20.725 1355.216214: reconnect bdev controller NVMe0 00:31:20.725 3355.611779: reconnect delay bdev controller NVMe0 00:31:20.725 3355.642815: reconnect bdev controller NVMe0 00:31:20.725 5356.167630: reconnect delay bdev controller NVMe0 00:31:20.725 5356.212827: reconnect bdev controller NVMe0 00:31:20.725 7356.846017: reconnect delay bdev controller NVMe0 00:31:20.725 7356.882997: reconnect bdev controller NVMe0 00:31:20.725 15:49:50 -- host/timeout.sh@132 -- # grep -c 'reconnect delay bdev controller NVMe0' 00:31:20.725 15:49:50 -- host/timeout.sh@132 -- # (( 3 <= 2 )) 00:31:20.725 15:49:50 -- host/timeout.sh@136 -- # kill 89084 00:31:20.725 15:49:50 -- host/timeout.sh@137 -- # rm -f /home/vagrant/spdk_repo/spdk/test/nvmf/host/trace.txt 00:31:20.725 15:49:50 -- host/timeout.sh@139 -- # killprocess 89057 00:31:20.725 15:49:50 -- common/autotest_common.sh@936 -- # '[' -z 89057 ']' 00:31:20.725 15:49:50 -- common/autotest_common.sh@940 -- # kill -0 89057 00:31:20.725 15:49:50 -- common/autotest_common.sh@941 -- # uname 00:31:20.725 15:49:50 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:20.725 15:49:50 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89057 00:31:20.725 killing process with pid 89057 00:31:20.725 Received shutdown signal, test time was about 8.231222 seconds 00:31:20.725 00:31:20.725 Latency(us) 00:31:20.725 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:20.725 =================================================================================================================== 00:31:20.725 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:20.725 15:49:50 -- common/autotest_common.sh@942 -- # process_name=reactor_2 00:31:20.725 15:49:50 -- common/autotest_common.sh@946 -- # '[' reactor_2 = sudo ']' 00:31:20.725 15:49:50 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89057' 00:31:20.725 15:49:50 -- common/autotest_common.sh@955 -- # kill 89057 00:31:20.725 15:49:50 -- common/autotest_common.sh@960 -- # wait 89057 00:31:20.982 15:49:51 -- host/timeout.sh@141 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:21.241 15:49:51 -- host/timeout.sh@143 -- # trap - SIGINT SIGTERM EXIT 00:31:21.241 15:49:51 -- host/timeout.sh@145 -- # nvmftestfini 00:31:21.241 15:49:51 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:21.241 15:49:51 -- nvmf/common.sh@117 -- # sync 00:31:21.241 15:49:51 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:21.241 15:49:51 -- nvmf/common.sh@120 -- # set +e 00:31:21.241 15:49:51 -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:21.241 15:49:51 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:21.241 rmmod nvme_tcp 00:31:21.241 rmmod nvme_fabrics 00:31:21.241 rmmod nvme_keyring 00:31:21.241 15:49:51 -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:31:21.241 15:49:51 -- nvmf/common.sh@124 -- # set -e 00:31:21.241 15:49:51 -- nvmf/common.sh@125 -- # return 0 00:31:21.241 15:49:51 -- nvmf/common.sh@478 -- # '[' -n 88463 ']' 00:31:21.241 15:49:51 -- nvmf/common.sh@479 -- # killprocess 88463 00:31:21.241 15:49:51 -- common/autotest_common.sh@936 -- # '[' -z 88463 ']' 00:31:21.241 15:49:51 -- common/autotest_common.sh@940 -- # kill -0 88463 00:31:21.241 15:49:51 -- common/autotest_common.sh@941 -- # uname 00:31:21.241 15:49:51 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:21.241 15:49:51 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 88463 00:31:21.241 15:49:51 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:21.241 15:49:51 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:21.241 killing process with pid 88463 00:31:21.241 15:49:51 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 88463' 00:31:21.241 15:49:51 -- common/autotest_common.sh@955 -- # kill 88463 00:31:21.241 15:49:51 -- common/autotest_common.sh@960 -- # wait 88463 00:31:21.502 15:49:51 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:21.503 15:49:51 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:21.503 15:49:51 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:21.503 15:49:51 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:21.503 15:49:51 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:21.503 15:49:51 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:21.503 15:49:51 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:21.503 15:49:51 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:21.760 15:49:51 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:21.760 00:31:21.760 real 0m47.984s 00:31:21.760 user 2m21.452s 00:31:21.760 sys 0m5.270s 00:31:21.760 15:49:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:21.760 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:31:21.760 ************************************ 00:31:21.760 END TEST nvmf_timeout 00:31:21.760 ************************************ 00:31:21.760 15:49:51 -- nvmf/nvmf.sh@118 -- # [[ virt == phy ]] 00:31:21.760 15:49:51 -- nvmf/nvmf.sh@123 -- # timing_exit host 00:31:21.760 15:49:51 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:21.760 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:31:21.760 15:49:51 -- nvmf/nvmf.sh@125 -- # trap - SIGINT SIGTERM EXIT 00:31:21.760 00:31:21.760 real 12m13.288s 00:31:21.760 user 32m14.445s 00:31:21.760 sys 2m51.281s 00:31:21.760 15:49:51 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:21.760 ************************************ 00:31:21.760 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:31:21.760 END TEST nvmf_tcp 00:31:21.760 ************************************ 00:31:21.760 15:49:51 -- spdk/autotest.sh@286 -- # [[ 0 -eq 0 ]] 00:31:21.760 15:49:51 -- spdk/autotest.sh@287 -- # run_test spdkcli_nvmf_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:21.760 15:49:51 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:21.760 15:49:51 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:21.760 15:49:51 -- common/autotest_common.sh@10 -- # set +x 00:31:21.760 ************************************ 00:31:21.760 START TEST spdkcli_nvmf_tcp 00:31:21.760 ************************************ 00:31:21.760 15:49:51 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:31:22.018 * Looking for test storage... 00:31:22.018 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:31:22.018 15:49:52 -- spdkcli/nvmf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:31:22.018 15:49:52 -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:31:22.018 15:49:52 -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:31:22.018 15:49:52 -- spdkcli/nvmf.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:22.018 15:49:52 -- nvmf/common.sh@7 -- # uname -s 00:31:22.018 15:49:52 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:22.019 15:49:52 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:22.019 15:49:52 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:22.019 15:49:52 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:22.019 15:49:52 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:22.019 15:49:52 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:22.019 15:49:52 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:22.019 15:49:52 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:22.019 15:49:52 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:22.019 15:49:52 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:22.019 15:49:52 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:31:22.019 15:49:52 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:31:22.019 15:49:52 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:22.019 15:49:52 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:22.019 15:49:52 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:22.019 15:49:52 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:22.019 15:49:52 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:22.019 15:49:52 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:22.019 15:49:52 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:22.019 15:49:52 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:22.019 15:49:52 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.019 15:49:52 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.019 15:49:52 -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.019 15:49:52 -- paths/export.sh@5 -- # export PATH 00:31:22.019 15:49:52 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:22.019 15:49:52 -- nvmf/common.sh@47 -- # : 0 00:31:22.019 15:49:52 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:22.019 15:49:52 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:22.019 15:49:52 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:22.019 15:49:52 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:22.019 15:49:52 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:22.019 15:49:52 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:22.019 15:49:52 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:22.019 15:49:52 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:22.019 15:49:52 -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:31:22.019 15:49:52 -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:31:22.019 15:49:52 -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:31:22.019 15:49:52 -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:31:22.019 15:49:52 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:22.019 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.019 15:49:52 -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:31:22.019 15:49:52 -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=89362 00:31:22.019 15:49:52 -- spdkcli/common.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:31:22.019 15:49:52 -- spdkcli/common.sh@34 -- # waitforlisten 89362 00:31:22.019 15:49:52 -- common/autotest_common.sh@817 -- # '[' -z 89362 ']' 00:31:22.019 15:49:52 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:22.019 15:49:52 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:22.019 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:22.019 15:49:52 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:22.019 15:49:52 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:22.019 15:49:52 -- common/autotest_common.sh@10 -- # set +x 00:31:22.019 [2024-04-26 15:49:52.164760] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
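Once the target is listening on /var/tmp/spdk.sock, the test feeds spdkcli_job.py the command list shown below: it creates six malloc bdevs, a TCP transport, and three subsystems with namespaces, listeners on 127.0.0.1, and host allow rules, then verifies the tree with 'll /nvmf'. The same configuration can be built one command at a time; a hedged sketch, reusing command strings from that list and assuming scripts/spdkcli.py accepts the command line as arguments (as the 'll /nvmf' call later in this run does):

    spdkcli=/home/vagrant/spdk_repo/spdk/scripts/spdkcli.py
    $spdkcli /bdevs/malloc create 32 512 Malloc3
    $spdkcli nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192
    $spdkcli /nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1
    $spdkcli /nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4
    $spdkcli ll /nvmf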
00:31:22.019 [2024-04-26 15:49:52.164872] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89362 ] 00:31:22.019 [2024-04-26 15:49:52.303391] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:22.276 [2024-04-26 15:49:52.421384] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:22.276 [2024-04-26 15:49:52.421395] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.841 15:49:53 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:22.841 15:49:53 -- common/autotest_common.sh@850 -- # return 0 00:31:22.841 15:49:53 -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:31:22.841 15:49:53 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:22.841 15:49:53 -- common/autotest_common.sh@10 -- # set +x 00:31:23.099 15:49:53 -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:31:23.099 15:49:53 -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:31:23.099 15:49:53 -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:31:23.099 15:49:53 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:23.099 15:49:53 -- common/autotest_common.sh@10 -- # set +x 00:31:23.099 15:49:53 -- spdkcli/nvmf.sh@65 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:31:23.099 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:31:23.099 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:31:23.099 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:31:23.099 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:31:23.099 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:31:23.099 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:31:23.099 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.099 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.099 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:23.099 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:31:23.099 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:31:23.099 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:31:23.099 ' 00:31:23.357 [2024-04-26 15:49:53.622125] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:25.884 [2024-04-26 15:49:55.888744] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:27.257 [2024-04-26 15:49:57.165836] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:31:29.791 [2024-04-26 15:49:59.531650] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:31:31.691 [2024-04-26 15:50:01.586326] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:31:33.072 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:31:33.072 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:31:33.072 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:31:33.072 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:31:33.072 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:31:33.072 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:31:33.072 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:31:33.072 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:33.072 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:31:33.072 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:33.072 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:31:33.072 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:31:33.072 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:31:33.072 15:50:03 -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:31:33.072 15:50:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:33.072 15:50:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.072 15:50:03 -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:31:33.072 15:50:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:33.072 15:50:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.072 15:50:03 -- spdkcli/nvmf.sh@69 -- # check_match 00:31:33.072 15:50:03 -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /nvmf 00:31:33.638 15:50:03 -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:31:33.638 15:50:03 -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:31:33.638 15:50:03 -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:31:33.638 15:50:03 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:33.638 15:50:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.638 15:50:03 -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:31:33.638 15:50:03 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:33.638 15:50:03 -- common/autotest_common.sh@10 -- # set +x 00:31:33.638 15:50:03 -- spdkcli/nvmf.sh@87 -- # 
/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:31:33.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:31:33.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:33.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:31:33.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:31:33.638 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:31:33.638 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:31:33.638 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:31:33.638 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:31:33.638 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:31:33.638 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:31:33.638 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:31:33.638 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:31:33.638 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:31:33.638 ' 00:31:40.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:31:40.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:31:40.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:40.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:31:40.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:31:40.189 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:31:40.189 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:31:40.189 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:31:40.189 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:31:40.189 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:31:40.189 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:31:40.189 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:31:40.189 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:31:40.189 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:31:40.189 15:50:09 -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:31:40.189 15:50:09 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:40.189 15:50:09 -- common/autotest_common.sh@10 -- # set +x 00:31:40.189 15:50:09 -- spdkcli/nvmf.sh@90 -- # killprocess 89362 00:31:40.189 15:50:09 -- common/autotest_common.sh@936 -- # '[' -z 89362 ']' 00:31:40.189 15:50:09 -- common/autotest_common.sh@940 -- # kill -0 89362 00:31:40.189 15:50:09 -- common/autotest_common.sh@941 -- # uname 00:31:40.189 15:50:09 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:40.189 15:50:09 -- common/autotest_common.sh@942 
-- # ps --no-headers -o comm= 89362 00:31:40.190 15:50:09 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:40.190 15:50:09 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:40.190 15:50:09 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89362' 00:31:40.190 killing process with pid 89362 00:31:40.190 15:50:09 -- common/autotest_common.sh@955 -- # kill 89362 00:31:40.190 [2024-04-26 15:50:09.453503] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:40.190 15:50:09 -- common/autotest_common.sh@960 -- # wait 89362 00:31:40.190 15:50:09 -- spdkcli/nvmf.sh@1 -- # cleanup 00:31:40.190 15:50:09 -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:31:40.190 15:50:09 -- spdkcli/common.sh@13 -- # '[' -n 89362 ']' 00:31:40.190 15:50:09 -- spdkcli/common.sh@14 -- # killprocess 89362 00:31:40.190 15:50:09 -- common/autotest_common.sh@936 -- # '[' -z 89362 ']' 00:31:40.190 15:50:09 -- common/autotest_common.sh@940 -- # kill -0 89362 00:31:40.190 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (89362) - No such process 00:31:40.190 Process with pid 89362 is not found 00:31:40.190 15:50:09 -- common/autotest_common.sh@963 -- # echo 'Process with pid 89362 is not found' 00:31:40.190 15:50:09 -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:31:40.190 15:50:09 -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:31:40.190 15:50:09 -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_nvmf.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:31:40.190 00:31:40.190 real 0m17.818s 00:31:40.190 user 0m38.455s 00:31:40.190 sys 0m1.011s 00:31:40.190 15:50:09 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:40.190 15:50:09 -- common/autotest_common.sh@10 -- # set +x 00:31:40.190 ************************************ 00:31:40.190 END TEST spdkcli_nvmf_tcp 00:31:40.190 ************************************ 00:31:40.190 15:50:09 -- spdk/autotest.sh@288 -- # run_test nvmf_identify_passthru /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:40.190 15:50:09 -- common/autotest_common.sh@1087 -- # '[' 3 -le 1 ']' 00:31:40.190 15:50:09 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:40.190 15:50:09 -- common/autotest_common.sh@10 -- # set +x 00:31:40.190 ************************************ 00:31:40.190 START TEST nvmf_identify_passthru 00:31:40.190 ************************************ 00:31:40.190 15:50:09 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:31:40.190 * Looking for test storage... 
00:31:40.190 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:40.190 15:50:10 -- target/identify_passthru.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:40.190 15:50:10 -- nvmf/common.sh@7 -- # uname -s 00:31:40.190 15:50:10 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:40.190 15:50:10 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:40.190 15:50:10 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:40.190 15:50:10 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:40.190 15:50:10 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:40.190 15:50:10 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:40.190 15:50:10 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:40.190 15:50:10 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:40.190 15:50:10 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:40.190 15:50:10 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:40.190 15:50:10 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:31:40.190 15:50:10 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:31:40.190 15:50:10 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:40.190 15:50:10 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:40.190 15:50:10 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:40.190 15:50:10 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:40.190 15:50:10 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:40.190 15:50:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.190 15:50:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.190 15:50:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.190 15:50:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- paths/export.sh@5 -- # export PATH 00:31:40.190 15:50:10 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- nvmf/common.sh@47 -- # : 0 00:31:40.190 15:50:10 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:40.190 15:50:10 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:40.190 15:50:10 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:40.190 15:50:10 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:40.190 15:50:10 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:40.190 15:50:10 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:40.190 15:50:10 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:40.190 15:50:10 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:40.190 15:50:10 -- target/identify_passthru.sh@10 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:40.190 15:50:10 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:40.190 15:50:10 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:40.190 15:50:10 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:40.190 15:50:10 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- paths/export.sh@5 -- # export PATH 00:31:40.190 15:50:10 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:40.190 15:50:10 -- 
target/identify_passthru.sh@12 -- # nvmftestinit 00:31:40.190 15:50:10 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:40.190 15:50:10 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:40.190 15:50:10 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:40.190 15:50:10 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:40.190 15:50:10 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:40.190 15:50:10 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:40.190 15:50:10 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:40.190 15:50:10 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:40.190 15:50:10 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:40.190 15:50:10 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:40.190 15:50:10 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:40.190 15:50:10 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:40.190 15:50:10 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:40.190 15:50:10 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:40.190 15:50:10 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:40.190 15:50:10 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:40.190 15:50:10 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:40.190 15:50:10 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:40.190 15:50:10 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:40.190 15:50:10 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:40.190 15:50:10 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:40.190 15:50:10 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:40.190 15:50:10 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:40.190 15:50:10 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:40.190 15:50:10 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:40.190 15:50:10 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:40.190 15:50:10 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:40.190 15:50:10 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:40.190 Cannot find device "nvmf_tgt_br" 00:31:40.190 15:50:10 -- nvmf/common.sh@155 -- # true 00:31:40.190 15:50:10 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:40.190 Cannot find device "nvmf_tgt_br2" 00:31:40.190 15:50:10 -- nvmf/common.sh@156 -- # true 00:31:40.190 15:50:10 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:40.190 15:50:10 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:40.190 Cannot find device "nvmf_tgt_br" 00:31:40.190 15:50:10 -- nvmf/common.sh@158 -- # true 00:31:40.190 15:50:10 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:40.191 Cannot find device "nvmf_tgt_br2" 00:31:40.191 15:50:10 -- nvmf/common.sh@159 -- # true 00:31:40.191 15:50:10 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:40.191 15:50:10 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:40.191 15:50:10 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:40.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:40.191 15:50:10 -- nvmf/common.sh@162 -- # true 00:31:40.191 15:50:10 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:40.191 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or 
directory 00:31:40.191 15:50:10 -- nvmf/common.sh@163 -- # true 00:31:40.191 15:50:10 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:40.191 15:50:10 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:40.191 15:50:10 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:40.191 15:50:10 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:40.191 15:50:10 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:40.191 15:50:10 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:40.191 15:50:10 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:40.191 15:50:10 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:40.191 15:50:10 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:40.191 15:50:10 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:40.191 15:50:10 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:40.191 15:50:10 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:40.191 15:50:10 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:40.191 15:50:10 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:40.191 15:50:10 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:40.191 15:50:10 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:40.191 15:50:10 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:40.191 15:50:10 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:40.191 15:50:10 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:40.191 15:50:10 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:40.191 15:50:10 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:40.191 15:50:10 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:40.191 15:50:10 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:40.191 15:50:10 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:40.191 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:40.191 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.066 ms 00:31:40.191 00:31:40.191 --- 10.0.0.2 ping statistics --- 00:31:40.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.191 rtt min/avg/max/mdev = 0.066/0.066/0.066/0.000 ms 00:31:40.191 15:50:10 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:40.191 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:40.191 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.061 ms 00:31:40.191 00:31:40.191 --- 10.0.0.3 ping statistics --- 00:31:40.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.191 rtt min/avg/max/mdev = 0.061/0.061/0.061/0.000 ms 00:31:40.191 15:50:10 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:40.191 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:40.191 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.027 ms 00:31:40.191 00:31:40.191 --- 10.0.0.1 ping statistics --- 00:31:40.191 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:40.191 rtt min/avg/max/mdev = 0.027/0.027/0.027/0.000 ms 00:31:40.191 15:50:10 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:40.191 15:50:10 -- nvmf/common.sh@422 -- # return 0 00:31:40.191 15:50:10 -- nvmf/common.sh@439 -- # '[' '' == iso ']' 00:31:40.191 15:50:10 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:40.191 15:50:10 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:40.191 15:50:10 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:40.191 15:50:10 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:40.191 15:50:10 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:40.191 15:50:10 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:40.191 15:50:10 -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:31:40.191 15:50:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:40.191 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:31:40.191 15:50:10 -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:31:40.191 15:50:10 -- common/autotest_common.sh@1510 -- # bdfs=() 00:31:40.191 15:50:10 -- common/autotest_common.sh@1510 -- # local bdfs 00:31:40.191 15:50:10 -- common/autotest_common.sh@1511 -- # bdfs=($(get_nvme_bdfs)) 00:31:40.191 15:50:10 -- common/autotest_common.sh@1511 -- # get_nvme_bdfs 00:31:40.191 15:50:10 -- common/autotest_common.sh@1499 -- # bdfs=() 00:31:40.191 15:50:10 -- common/autotest_common.sh@1499 -- # local bdfs 00:31:40.191 15:50:10 -- common/autotest_common.sh@1500 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:31:40.191 15:50:10 -- common/autotest_common.sh@1500 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:31:40.191 15:50:10 -- common/autotest_common.sh@1500 -- # jq -r '.config[].params.traddr' 00:31:40.191 15:50:10 -- common/autotest_common.sh@1501 -- # (( 2 == 0 )) 00:31:40.191 15:50:10 -- common/autotest_common.sh@1505 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:31:40.191 15:50:10 -- common/autotest_common.sh@1513 -- # echo 0000:00:10.0 00:31:40.191 15:50:10 -- target/identify_passthru.sh@16 -- # bdf=0000:00:10.0 00:31:40.191 15:50:10 -- target/identify_passthru.sh@17 -- # '[' -z 0000:00:10.0 ']' 00:31:40.191 15:50:10 -- target/identify_passthru.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:40.191 15:50:10 -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:31:40.191 15:50:10 -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:31:40.449 15:50:10 -- target/identify_passthru.sh@23 -- # nvme_serial_number=12340 00:31:40.449 15:50:10 -- target/identify_passthru.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:31:40.449 15:50:10 -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:31:40.449 15:50:10 -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:31:40.709 15:50:10 -- target/identify_passthru.sh@24 -- # nvme_model_number=QEMU 00:31:40.709 15:50:10 -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:31:40.709 15:50:10 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:40.710 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 15:50:10 -- target/identify_passthru.sh@28 -- # timing_enter 
start_nvmf_tgt 00:31:40.710 15:50:10 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:40.710 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 15:50:10 -- target/identify_passthru.sh@31 -- # nvmfpid=89862 00:31:40.710 15:50:10 -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:31:40.710 15:50:10 -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:40.710 15:50:10 -- target/identify_passthru.sh@35 -- # waitforlisten 89862 00:31:40.710 15:50:10 -- common/autotest_common.sh@817 -- # '[' -z 89862 ']' 00:31:40.710 15:50:10 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:40.710 15:50:10 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:40.710 15:50:10 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:40.710 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:40.710 15:50:10 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:40.710 15:50:10 -- common/autotest_common.sh@10 -- # set +x 00:31:40.710 [2024-04-26 15:50:10.963693] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:31:40.710 [2024-04-26 15:50:10.963814] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:40.971 [2024-04-26 15:50:11.110887] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:40.971 [2024-04-26 15:50:11.239231] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:40.971 [2024-04-26 15:50:11.239293] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:40.971 [2024-04-26 15:50:11.239308] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:40.971 [2024-04-26 15:50:11.239320] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:40.971 [2024-04-26 15:50:11.239335] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
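For reference, the BDF discovery and identify scrape traced just above reduce to a short shell pattern. The sketch below mirrors those commands under the assumption of this run's spdk_repo layout; the identify() helper function is introduced only for the example and is not part of the harness.

  #!/usr/bin/env bash
  # Sketch of the get_first_nvme_bdf + spdk_nvme_identify scrape seen in the trace.
  rootdir=/home/vagrant/spdk_repo/spdk

  # gen_nvme.sh emits a bdev_nvme attach config; pull every PCIe address out of it
  # and keep the first one (0000:00:10.0 in this run).
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  bdf=${bdfs[0]}

  # Scrape serial and model the same way identify_passthru.sh does: column 3 of
  # the matching line in spdk_nvme_identify output.
  identify() {
    "$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
  }
  serial=$(identify | grep 'Serial Number:' | awk '{print $3}')
  model=$(identify | grep 'Model Number:' | awk '{print $3}')
  echo "bdf=$bdf serial=$serial model=$model"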
00:31:40.971 [2024-04-26 15:50:11.239492] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:31:40.971 [2024-04-26 15:50:11.240194] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:31:40.971 [2024-04-26 15:50:11.240274] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:40.971 [2024-04-26 15:50:11.240273] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:31:41.904 15:50:11 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:41.904 15:50:11 -- common/autotest_common.sh@850 -- # return 0 00:31:41.904 15:50:11 -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:31:41.904 15:50:11 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:41.904 15:50:11 -- common/autotest_common.sh@10 -- # set +x 00:31:41.904 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:41.904 15:50:12 -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:31:41.904 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:41.904 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:41.904 [2024-04-26 15:50:12.101551] nvmf_tgt.c: 453:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:31:41.904 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:41.904 15:50:12 -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:41.904 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:41.904 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:41.904 [2024-04-26 15:50:12.115592] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:41.904 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:41.904 15:50:12 -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:31:41.904 15:50:12 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:41.904 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:41.904 15:50:12 -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:31:41.904 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:41.904 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:42.163 Nvme0n1 00:31:42.163 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.163 15:50:12 -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:31:42.163 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.163 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:42.163 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.163 15:50:12 -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:31:42.163 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.163 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:42.163 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.163 15:50:12 -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:42.163 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.163 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:42.163 [2024-04-26 15:50:12.261103] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:42.163 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 
]] 00:31:42.163 15:50:12 -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:31:42.163 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.163 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:42.163 [2024-04-26 15:50:12.272621] nvmf_rpc.c: 275:rpc_nvmf_get_subsystems: *WARNING*: rpc_nvmf_get_subsystems: deprecated feature listener.transport is deprecated in favor of trtype to be removed in v24.05 00:31:42.163 [ 00:31:42.163 { 00:31:42.163 "allow_any_host": true, 00:31:42.163 "hosts": [], 00:31:42.163 "listen_addresses": [], 00:31:42.163 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:31:42.163 "subtype": "Discovery" 00:31:42.163 }, 00:31:42.163 { 00:31:42.163 "allow_any_host": true, 00:31:42.163 "hosts": [], 00:31:42.163 "listen_addresses": [ 00:31:42.163 { 00:31:42.163 "adrfam": "IPv4", 00:31:42.163 "traddr": "10.0.0.2", 00:31:42.163 "transport": "TCP", 00:31:42.163 "trsvcid": "4420", 00:31:42.163 "trtype": "TCP" 00:31:42.163 } 00:31:42.163 ], 00:31:42.163 "max_cntlid": 65519, 00:31:42.163 "max_namespaces": 1, 00:31:42.163 "min_cntlid": 1, 00:31:42.163 "model_number": "SPDK bdev Controller", 00:31:42.163 "namespaces": [ 00:31:42.163 { 00:31:42.163 "bdev_name": "Nvme0n1", 00:31:42.163 "name": "Nvme0n1", 00:31:42.163 "nguid": "0991D6932BF44A7CAFA26835996568DC", 00:31:42.163 "nsid": 1, 00:31:42.163 "uuid": "0991d693-2bf4-4a7c-afa2-6835996568dc" 00:31:42.163 } 00:31:42.163 ], 00:31:42.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:31:42.163 "serial_number": "SPDK00000000000001", 00:31:42.163 "subtype": "NVMe" 00:31:42.163 } 00:31:42.163 ] 00:31:42.163 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.163 15:50:12 -- target/identify_passthru.sh@54 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:42.163 15:50:12 -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:31:42.163 15:50:12 -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:31:42.420 15:50:12 -- target/identify_passthru.sh@54 -- # nvmf_serial_number=12340 00:31:42.420 15:50:12 -- target/identify_passthru.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:31:42.420 15:50:12 -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:31:42.420 15:50:12 -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:31:42.677 15:50:12 -- target/identify_passthru.sh@61 -- # nvmf_model_number=QEMU 00:31:42.677 15:50:12 -- target/identify_passthru.sh@63 -- # '[' 12340 '!=' 12340 ']' 00:31:42.677 15:50:12 -- target/identify_passthru.sh@68 -- # '[' QEMU '!=' QEMU ']' 00:31:42.677 15:50:12 -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:42.677 15:50:12 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:42.677 15:50:12 -- common/autotest_common.sh@10 -- # set +x 00:31:42.677 15:50:12 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:42.677 15:50:12 -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:31:42.677 15:50:12 -- target/identify_passthru.sh@77 -- # nvmftestfini 00:31:42.677 15:50:12 -- nvmf/common.sh@477 -- # nvmfcleanup 00:31:42.677 15:50:12 -- nvmf/common.sh@117 -- # sync 00:31:42.935 15:50:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:42.935 15:50:13 -- nvmf/common.sh@120 -- # set +e 00:31:42.935 15:50:13 -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:31:42.935 15:50:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:42.935 rmmod nvme_tcp 00:31:42.935 rmmod nvme_fabrics 00:31:42.935 rmmod nvme_keyring 00:31:42.935 15:50:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:42.935 15:50:13 -- nvmf/common.sh@124 -- # set -e 00:31:42.935 15:50:13 -- nvmf/common.sh@125 -- # return 0 00:31:42.935 15:50:13 -- nvmf/common.sh@478 -- # '[' -n 89862 ']' 00:31:42.935 15:50:13 -- nvmf/common.sh@479 -- # killprocess 89862 00:31:42.935 15:50:13 -- common/autotest_common.sh@936 -- # '[' -z 89862 ']' 00:31:42.935 15:50:13 -- common/autotest_common.sh@940 -- # kill -0 89862 00:31:42.935 15:50:13 -- common/autotest_common.sh@941 -- # uname 00:31:42.935 15:50:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:31:42.935 15:50:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 89862 00:31:42.935 killing process with pid 89862 00:31:42.935 15:50:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:31:42.935 15:50:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:31:42.935 15:50:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 89862' 00:31:42.935 15:50:13 -- common/autotest_common.sh@955 -- # kill 89862 00:31:42.935 [2024-04-26 15:50:13.168555] app.c: 937:log_deprecation_hits: *WARNING*: rpc_nvmf_get_subsystems: deprecation 'listener.transport is deprecated in favor of trtype' scheduled for removal in v24.05 hit 1 times 00:31:42.935 15:50:13 -- common/autotest_common.sh@960 -- # wait 89862 00:31:43.194 15:50:13 -- nvmf/common.sh@481 -- # '[' '' == iso ']' 00:31:43.194 15:50:13 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:31:43.194 15:50:13 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:31:43.194 15:50:13 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:43.194 15:50:13 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:43.194 15:50:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.194 15:50:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:43.194 15:50:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.194 15:50:13 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:31:43.194 00:31:43.194 real 0m3.524s 00:31:43.194 user 0m9.071s 00:31:43.194 sys 0m0.875s 00:31:43.194 ************************************ 00:31:43.194 END TEST nvmf_identify_passthru 00:31:43.194 ************************************ 00:31:43.194 15:50:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:43.194 15:50:13 -- common/autotest_common.sh@10 -- # set +x 00:31:43.453 15:50:13 -- spdk/autotest.sh@290 -- # run_test nvmf_dif /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:43.453 15:50:13 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:43.453 15:50:13 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:43.453 15:50:13 -- common/autotest_common.sh@10 -- # set +x 00:31:43.453 ************************************ 00:31:43.453 START TEST nvmf_dif 00:31:43.453 ************************************ 00:31:43.453 15:50:13 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/dif.sh 00:31:43.453 * Looking for test storage... 
00:31:43.453 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:31:43.453 15:50:13 -- target/dif.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:31:43.453 15:50:13 -- nvmf/common.sh@7 -- # uname -s 00:31:43.453 15:50:13 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:43.453 15:50:13 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:43.453 15:50:13 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:43.453 15:50:13 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:43.453 15:50:13 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:43.453 15:50:13 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:43.453 15:50:13 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:43.453 15:50:13 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:43.453 15:50:13 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:43.453 15:50:13 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:43.453 15:50:13 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:31:43.453 15:50:13 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:31:43.453 15:50:13 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:43.453 15:50:13 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:43.453 15:50:13 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:31:43.453 15:50:13 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:43.453 15:50:13 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:31:43.453 15:50:13 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:43.453 15:50:13 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:43.453 15:50:13 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:43.453 15:50:13 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.453 15:50:13 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.453 15:50:13 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.453 15:50:13 -- paths/export.sh@5 -- # export PATH 00:31:43.453 15:50:13 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:43.453 15:50:13 -- nvmf/common.sh@47 -- # : 0 00:31:43.453 15:50:13 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:43.453 15:50:13 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:43.453 15:50:13 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:43.453 15:50:13 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:43.453 15:50:13 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:43.453 15:50:13 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:43.453 15:50:13 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:43.453 15:50:13 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:43.453 15:50:13 -- target/dif.sh@15 -- # NULL_META=16 00:31:43.453 15:50:13 -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:31:43.453 15:50:13 -- target/dif.sh@15 -- # NULL_SIZE=64 00:31:43.453 15:50:13 -- target/dif.sh@15 -- # NULL_DIF=1 00:31:43.453 15:50:13 -- target/dif.sh@135 -- # nvmftestinit 00:31:43.453 15:50:13 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:31:43.453 15:50:13 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:43.453 15:50:13 -- nvmf/common.sh@437 -- # prepare_net_devs 00:31:43.453 15:50:13 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:31:43.453 15:50:13 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:31:43.453 15:50:13 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:43.453 15:50:13 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:31:43.453 15:50:13 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:43.453 15:50:13 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:31:43.453 15:50:13 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:31:43.453 15:50:13 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:31:43.453 15:50:13 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:31:43.453 15:50:13 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:31:43.453 15:50:13 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:31:43.453 15:50:13 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:43.453 15:50:13 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:43.453 15:50:13 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:31:43.453 15:50:13 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:31:43.453 15:50:13 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:31:43.453 15:50:13 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 00:31:43.453 15:50:13 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:31:43.453 15:50:13 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:43.453 15:50:13 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:31:43.453 15:50:13 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:31:43.453 15:50:13 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:31:43.453 15:50:13 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:31:43.453 15:50:13 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:31:43.453 15:50:13 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:31:43.453 Cannot find device "nvmf_tgt_br" 
00:31:43.453 15:50:13 -- nvmf/common.sh@155 -- # true 00:31:43.453 15:50:13 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:31:43.712 Cannot find device "nvmf_tgt_br2" 00:31:43.712 15:50:13 -- nvmf/common.sh@156 -- # true 00:31:43.712 15:50:13 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:31:43.712 15:50:13 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:31:43.712 Cannot find device "nvmf_tgt_br" 00:31:43.712 15:50:13 -- nvmf/common.sh@158 -- # true 00:31:43.712 15:50:13 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:31:43.712 Cannot find device "nvmf_tgt_br2" 00:31:43.712 15:50:13 -- nvmf/common.sh@159 -- # true 00:31:43.712 15:50:13 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:31:43.712 15:50:13 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:31:43.712 15:50:13 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:31:43.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:43.712 15:50:13 -- nvmf/common.sh@162 -- # true 00:31:43.712 15:50:13 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:31:43.712 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:31:43.712 15:50:13 -- nvmf/common.sh@163 -- # true 00:31:43.712 15:50:13 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:31:43.712 15:50:13 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:31:43.712 15:50:13 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:31:43.712 15:50:13 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:31:43.712 15:50:13 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:31:43.712 15:50:13 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:31:43.712 15:50:13 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:31:43.712 15:50:13 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:31:43.712 15:50:13 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:31:43.712 15:50:13 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:31:43.712 15:50:13 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:31:43.712 15:50:13 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:31:43.712 15:50:13 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:31:43.712 15:50:13 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:31:43.712 15:50:13 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:31:43.712 15:50:13 -- nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:31:43.712 15:50:13 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:31:43.712 15:50:13 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:31:43.712 15:50:13 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:31:43.712 15:50:13 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:31:43.970 15:50:14 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:31:43.970 15:50:14 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:31:43.970 15:50:14 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:31:43.970 15:50:14 -- 
nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:31:43.970 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:43.970 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.155 ms 00:31:43.970 00:31:43.970 --- 10.0.0.2 ping statistics --- 00:31:43.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.970 rtt min/avg/max/mdev = 0.155/0.155/0.155/0.000 ms 00:31:43.970 15:50:14 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:31:43.970 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:31:43.970 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.074 ms 00:31:43.970 00:31:43.970 --- 10.0.0.3 ping statistics --- 00:31:43.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.970 rtt min/avg/max/mdev = 0.074/0.074/0.074/0.000 ms 00:31:43.970 15:50:14 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:31:43.970 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:43.970 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.043 ms 00:31:43.970 00:31:43.970 --- 10.0.0.1 ping statistics --- 00:31:43.970 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:43.970 rtt min/avg/max/mdev = 0.043/0.043/0.043/0.000 ms 00:31:43.970 15:50:14 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:43.970 15:50:14 -- nvmf/common.sh@422 -- # return 0 00:31:43.970 15:50:14 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:31:43.970 15:50:14 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:31:44.228 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:31:44.228 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:44.228 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:31:44.228 15:50:14 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:44.228 15:50:14 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:31:44.228 15:50:14 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:31:44.228 15:50:14 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:44.228 15:50:14 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:31:44.228 15:50:14 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:31:44.228 15:50:14 -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:31:44.228 15:50:14 -- target/dif.sh@137 -- # nvmfappstart 00:31:44.228 15:50:14 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:31:44.228 15:50:14 -- common/autotest_common.sh@710 -- # xtrace_disable 00:31:44.228 15:50:14 -- common/autotest_common.sh@10 -- # set +x 00:31:44.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:44.228 15:50:14 -- nvmf/common.sh@470 -- # nvmfpid=90220 00:31:44.228 15:50:14 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:31:44.228 15:50:14 -- nvmf/common.sh@471 -- # waitforlisten 90220 00:31:44.228 15:50:14 -- common/autotest_common.sh@817 -- # '[' -z 90220 ']' 00:31:44.228 15:50:14 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:44.228 15:50:14 -- common/autotest_common.sh@822 -- # local max_retries=100 00:31:44.228 15:50:14 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
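The veth/namespace scaffolding that nvmf_veth_init traces above can be summarized as the standalone sketch below. Interface names, addresses, and rules are copied from the trace; treat it as an illustration of the topology (initiator on 10.0.0.1 in the default namespace, the two target interfaces on 10.0.0.2 and 10.0.0.3 inside nvmf_tgt_ns_spdk, all joined through the nvmf_br bridge) rather than the harness code itself.

  # Target-side interfaces live in their own network namespace.
  ip netns add nvmf_tgt_ns_spdk
  ip link add nvmf_init_if type veth peer name nvmf_init_br
  ip link add nvmf_tgt_if  type veth peer name nvmf_tgt_br
  ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2
  ip link set nvmf_tgt_if  netns nvmf_tgt_ns_spdk
  ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk
  # Initiator address in the default namespace, target addresses in the netns.
  ip addr add 10.0.0.1/24 dev nvmf_init_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if
  ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2
  ip link set nvmf_init_if up
  ip link set nvmf_init_br up
  ip link set nvmf_tgt_br up
  ip link set nvmf_tgt_br2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up
  ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up
  ip netns exec nvmf_tgt_ns_spdk ip link set lo up
  # Bridge the bridge-side peers together and allow NVMe/TCP traffic on 4420.
  ip link add nvmf_br type bridge
  ip link set nvmf_br up
  ip link set nvmf_init_br master nvmf_br
  ip link set nvmf_tgt_br  master nvmf_br
  ip link set nvmf_tgt_br2 master nvmf_br
  iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT
  iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT
  ping -c 1 10.0.0.2   # initiator -> first target address across the bridge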
00:31:44.228 15:50:14 -- common/autotest_common.sh@826 -- # xtrace_disable 00:31:44.228 15:50:14 -- common/autotest_common.sh@10 -- # set +x 00:31:44.485 [2024-04-26 15:50:14.523021] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:31:44.485 [2024-04-26 15:50:14.523327] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:44.485 [2024-04-26 15:50:14.665072] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.743 [2024-04-26 15:50:14.782108] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:44.743 [2024-04-26 15:50:14.782190] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:44.743 [2024-04-26 15:50:14.782219] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:44.743 [2024-04-26 15:50:14.782228] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:44.743 [2024-04-26 15:50:14.782236] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:44.743 [2024-04-26 15:50:14.782292] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:31:45.675 15:50:15 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:31:45.675 15:50:15 -- common/autotest_common.sh@850 -- # return 0 00:31:45.676 15:50:15 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:31:45.676 15:50:15 -- common/autotest_common.sh@716 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 15:50:15 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:45.676 15:50:15 -- target/dif.sh@139 -- # create_transport 00:31:45.676 15:50:15 -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:31:45.676 15:50:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 [2024-04-26 15:50:15.659261] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:45.676 15:50:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.676 15:50:15 -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:31:45.676 15:50:15 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:45.676 15:50:15 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 ************************************ 00:31:45.676 START TEST fio_dif_1_default 00:31:45.676 ************************************ 00:31:45.676 15:50:15 -- common/autotest_common.sh@1111 -- # fio_dif_1 00:31:45.676 15:50:15 -- target/dif.sh@86 -- # create_subsystems 0 00:31:45.676 15:50:15 -- target/dif.sh@28 -- # local sub 00:31:45.676 15:50:15 -- target/dif.sh@30 -- # for sub in "$@" 00:31:45.676 15:50:15 -- target/dif.sh@31 -- # create_subsystem 0 00:31:45.676 15:50:15 -- target/dif.sh@18 -- # local sub_id=0 00:31:45.676 15:50:15 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:45.676 15:50:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 bdev_null0 00:31:45.676 15:50:15 -- 
common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.676 15:50:15 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:45.676 15:50:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 15:50:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.676 15:50:15 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:45.676 15:50:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 15:50:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.676 15:50:15 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:45.676 15:50:15 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:45.676 15:50:15 -- common/autotest_common.sh@10 -- # set +x 00:31:45.676 [2024-04-26 15:50:15.771357] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:45.676 15:50:15 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:45.676 15:50:15 -- target/dif.sh@87 -- # fio /dev/fd/62 00:31:45.676 15:50:15 -- target/dif.sh@87 -- # create_json_sub_conf 0 00:31:45.676 15:50:15 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:31:45.676 15:50:15 -- nvmf/common.sh@521 -- # config=() 00:31:45.676 15:50:15 -- nvmf/common.sh@521 -- # local subsystem config 00:31:45.676 15:50:15 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.676 15:50:15 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:45.676 15:50:15 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.676 15:50:15 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:45.676 { 00:31:45.676 "params": { 00:31:45.676 "name": "Nvme$subsystem", 00:31:45.676 "trtype": "$TEST_TRANSPORT", 00:31:45.676 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:45.676 "adrfam": "ipv4", 00:31:45.676 "trsvcid": "$NVMF_PORT", 00:31:45.676 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:45.676 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:45.676 "hdgst": ${hdgst:-false}, 00:31:45.676 "ddgst": ${ddgst:-false} 00:31:45.676 }, 00:31:45.676 "method": "bdev_nvme_attach_controller" 00:31:45.676 } 00:31:45.676 EOF 00:31:45.676 )") 00:31:45.676 15:50:15 -- target/dif.sh@82 -- # gen_fio_conf 00:31:45.676 15:50:15 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:45.676 15:50:15 -- target/dif.sh@54 -- # local file 00:31:45.676 15:50:15 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:45.676 15:50:15 -- target/dif.sh@56 -- # cat 00:31:45.676 15:50:15 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:45.676 15:50:15 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:45.676 15:50:15 -- common/autotest_common.sh@1327 -- # shift 00:31:45.676 15:50:15 -- nvmf/common.sh@543 -- # cat 00:31:45.676 15:50:15 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:45.676 15:50:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:45.676 15:50:15 -- 
target/dif.sh@72 -- # (( file = 1 )) 00:31:45.676 15:50:15 -- target/dif.sh@72 -- # (( file <= files )) 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:45.676 15:50:15 -- nvmf/common.sh@545 -- # jq . 00:31:45.676 15:50:15 -- nvmf/common.sh@546 -- # IFS=, 00:31:45.676 15:50:15 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:45.676 "params": { 00:31:45.676 "name": "Nvme0", 00:31:45.676 "trtype": "tcp", 00:31:45.676 "traddr": "10.0.0.2", 00:31:45.676 "adrfam": "ipv4", 00:31:45.676 "trsvcid": "4420", 00:31:45.676 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:45.676 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:45.676 "hdgst": false, 00:31:45.676 "ddgst": false 00:31:45.676 }, 00:31:45.676 "method": "bdev_nvme_attach_controller" 00:31:45.676 }' 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:45.676 15:50:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:45.676 15:50:15 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:45.676 15:50:15 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:45.676 15:50:15 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:45.676 15:50:15 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:45.676 15:50:15 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:45.934 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:45.934 fio-3.35 00:31:45.934 Starting 1 thread 00:31:58.138 00:31:58.138 filename0: (groupid=0, jobs=1): err= 0: pid=90314: Fri Apr 26 15:50:26 2024 00:31:58.138 read: IOPS=4553, BW=17.8MiB/s (18.7MB/s)(178MiB/10001msec) 00:31:58.138 slat (nsec): min=6066, max=55003, avg=8243.24, stdev=3056.66 00:31:58.138 clat (usec): min=377, max=42001, avg=854.32, stdev=3870.19 00:31:58.138 lat (usec): min=384, max=42011, avg=862.56, stdev=3870.24 00:31:58.138 clat percentiles (usec): 00:31:58.138 | 1.00th=[ 408], 5.00th=[ 429], 10.00th=[ 441], 20.00th=[ 457], 00:31:58.138 | 30.00th=[ 465], 40.00th=[ 474], 50.00th=[ 482], 60.00th=[ 486], 00:31:58.138 | 70.00th=[ 494], 80.00th=[ 506], 90.00th=[ 523], 95.00th=[ 537], 00:31:58.138 | 99.00th=[ 635], 99.50th=[40633], 99.90th=[41157], 99.95th=[41681], 00:31:58.138 | 99.99th=[41681] 00:31:58.138 bw ( KiB/s): min=13216, max=26720, per=99.94%, avg=18204.63, stdev=3166.40, samples=19 00:31:58.138 iops : min= 3304, max= 6680, avg=4551.16, stdev=791.60, samples=19 00:31:58.138 lat (usec) : 500=75.26%, 750=23.79%, 1000=0.01% 00:31:58.138 lat (msec) : 2=0.01%, 10=0.01%, 50=0.92% 00:31:58.138 cpu : usr=89.00%, sys=9.26%, ctx=18, majf=0, minf=0 00:31:58.138 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:58.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:58.138 issued rwts: total=45544,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:58.138 latency : target=0, window=0, percentile=100.00%, depth=4 00:31:58.138 00:31:58.138 Run status group 0 
(all jobs): 00:31:58.138 READ: bw=17.8MiB/s (18.7MB/s), 17.8MiB/s-17.8MiB/s (18.7MB/s-18.7MB/s), io=178MiB (187MB), run=10001-10001msec 00:31:58.138 15:50:26 -- target/dif.sh@88 -- # destroy_subsystems 0 00:31:58.138 15:50:26 -- target/dif.sh@43 -- # local sub 00:31:58.138 15:50:26 -- target/dif.sh@45 -- # for sub in "$@" 00:31:58.138 15:50:26 -- target/dif.sh@46 -- # destroy_subsystem 0 00:31:58.138 15:50:26 -- target/dif.sh@36 -- # local sub_id=0 00:31:58.139 15:50:26 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 ************************************ 00:31:58.139 END TEST fio_dif_1_default 00:31:58.139 ************************************ 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 00:31:58.139 real 0m11.064s 00:31:58.139 user 0m9.611s 00:31:58.139 sys 0m1.192s 00:31:58.139 15:50:26 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:31:58.139 15:50:26 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:31:58.139 15:50:26 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 ************************************ 00:31:58.139 START TEST fio_dif_1_multi_subsystems 00:31:58.139 ************************************ 00:31:58.139 15:50:26 -- common/autotest_common.sh@1111 -- # fio_dif_1_multi_subsystems 00:31:58.139 15:50:26 -- target/dif.sh@92 -- # local files=1 00:31:58.139 15:50:26 -- target/dif.sh@94 -- # create_subsystems 0 1 00:31:58.139 15:50:26 -- target/dif.sh@28 -- # local sub 00:31:58.139 15:50:26 -- target/dif.sh@30 -- # for sub in "$@" 00:31:58.139 15:50:26 -- target/dif.sh@31 -- # create_subsystem 0 00:31:58.139 15:50:26 -- target/dif.sh@18 -- # local sub_id=0 00:31:58.139 15:50:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 bdev_null0 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp 
-a 10.0.0.2 -s 4420 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 [2024-04-26 15:50:26.961486] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@30 -- # for sub in "$@" 00:31:58.139 15:50:26 -- target/dif.sh@31 -- # create_subsystem 1 00:31:58.139 15:50:26 -- target/dif.sh@18 -- # local sub_id=1 00:31:58.139 15:50:26 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 bdev_null1 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:58.139 15:50:26 -- common/autotest_common.sh@549 -- # xtrace_disable 00:31:58.139 15:50:26 -- common/autotest_common.sh@10 -- # set +x 00:31:58.139 15:50:26 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:31:58.139 15:50:26 -- target/dif.sh@95 -- # fio /dev/fd/62 00:31:58.139 15:50:26 -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:31:58.139 15:50:26 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:31:58.139 15:50:27 -- nvmf/common.sh@521 -- # config=() 00:31:58.139 15:50:27 -- nvmf/common.sh@521 -- # local subsystem config 00:31:58.139 15:50:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:58.139 15:50:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:58.139 { 00:31:58.139 "params": { 00:31:58.139 "name": "Nvme$subsystem", 00:31:58.139 "trtype": "$TEST_TRANSPORT", 00:31:58.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.139 "adrfam": "ipv4", 00:31:58.139 "trsvcid": "$NVMF_PORT", 00:31:58.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.139 "hdgst": ${hdgst:-false}, 00:31:58.139 "ddgst": ${ddgst:-false} 00:31:58.139 }, 00:31:58.139 "method": "bdev_nvme_attach_controller" 00:31:58.139 } 00:31:58.139 EOF 00:31:58.139 )") 00:31:58.139 15:50:27 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.139 15:50:27 -- target/dif.sh@82 -- # gen_fio_conf 00:31:58.139 15:50:27 -- target/dif.sh@54 -- # local file 00:31:58.139 15:50:27 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.139 15:50:27 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:31:58.139 15:50:27 -- target/dif.sh@56 -- # cat 
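The per-subsystem setup that create_subsystem() issues above through rpc_cmd corresponds, roughly, to the direct rpc.py sequence sketched below. The rpc.py path and the default /var/tmp/spdk.sock socket are assumptions not shown in the trace; the argument values are the ones visible in it.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for sub in 0 1; do
    # 64 MiB null bdev, 512-byte blocks, 16-byte metadata, DIF type 1
    $rpc bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 1
    # One NQN per subsystem, serial numbers 53313233-0 and 53313233-1
    $rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
         --serial-number "53313233-$sub" --allow-any-host
    $rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    # Both subsystems listen on the same TCP address/port inside the netns
    $rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
         -t tcp -a 10.0.0.2 -s 4420
  done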
00:31:58.139 15:50:27 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:58.139 15:50:27 -- common/autotest_common.sh@1325 -- # local sanitizers 00:31:58.139 15:50:27 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:58.139 15:50:27 -- common/autotest_common.sh@1327 -- # shift 00:31:58.139 15:50:27 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:31:58.139 15:50:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.139 15:50:27 -- target/dif.sh@72 -- # (( file = 1 )) 00:31:58.139 15:50:27 -- target/dif.sh@72 -- # (( file <= files )) 00:31:58.139 15:50:27 -- target/dif.sh@73 -- # cat 00:31:58.139 15:50:27 -- nvmf/common.sh@543 -- # cat 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # grep libasan 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:58.139 15:50:27 -- target/dif.sh@72 -- # (( file++ )) 00:31:58.139 15:50:27 -- target/dif.sh@72 -- # (( file <= files )) 00:31:58.139 15:50:27 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:31:58.139 15:50:27 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:31:58.139 { 00:31:58.139 "params": { 00:31:58.139 "name": "Nvme$subsystem", 00:31:58.139 "trtype": "$TEST_TRANSPORT", 00:31:58.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:58.139 "adrfam": "ipv4", 00:31:58.139 "trsvcid": "$NVMF_PORT", 00:31:58.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.139 "hdgst": ${hdgst:-false}, 00:31:58.139 "ddgst": ${ddgst:-false} 00:31:58.139 }, 00:31:58.139 "method": "bdev_nvme_attach_controller" 00:31:58.139 } 00:31:58.139 EOF 00:31:58.139 )") 00:31:58.139 15:50:27 -- nvmf/common.sh@543 -- # cat 00:31:58.139 15:50:27 -- nvmf/common.sh@545 -- # jq . 
00:31:58.139 15:50:27 -- nvmf/common.sh@546 -- # IFS=, 00:31:58.139 15:50:27 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:31:58.139 "params": { 00:31:58.139 "name": "Nvme0", 00:31:58.139 "trtype": "tcp", 00:31:58.139 "traddr": "10.0.0.2", 00:31:58.139 "adrfam": "ipv4", 00:31:58.139 "trsvcid": "4420", 00:31:58.139 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.139 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.139 "hdgst": false, 00:31:58.139 "ddgst": false 00:31:58.139 }, 00:31:58.139 "method": "bdev_nvme_attach_controller" 00:31:58.139 },{ 00:31:58.139 "params": { 00:31:58.139 "name": "Nvme1", 00:31:58.139 "trtype": "tcp", 00:31:58.139 "traddr": "10.0.0.2", 00:31:58.139 "adrfam": "ipv4", 00:31:58.139 "trsvcid": "4420", 00:31:58.139 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:58.139 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:58.139 "hdgst": false, 00:31:58.139 "ddgst": false 00:31:58.139 }, 00:31:58.139 "method": "bdev_nvme_attach_controller" 00:31:58.139 }' 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:58.139 15:50:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:58.139 15:50:27 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:31:58.139 15:50:27 -- common/autotest_common.sh@1331 -- # asan_lib= 00:31:58.139 15:50:27 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:31:58.139 15:50:27 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:31:58.139 15:50:27 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:31:58.139 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:58.139 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:31:58.139 fio-3.35 00:31:58.139 Starting 2 threads 00:32:08.099 00:32:08.099 filename0: (groupid=0, jobs=1): err= 0: pid=90477: Fri Apr 26 15:50:37 2024 00:32:08.099 read: IOPS=210, BW=844KiB/s (864kB/s)(8448KiB/10014msec) 00:32:08.099 slat (nsec): min=6959, max=71993, avg=10983.99, stdev=7743.00 00:32:08.099 clat (usec): min=443, max=42075, avg=18929.37, stdev=20197.14 00:32:08.099 lat (usec): min=450, max=42100, avg=18940.35, stdev=20197.15 00:32:08.099 clat percentiles (usec): 00:32:08.099 | 1.00th=[ 465], 5.00th=[ 482], 10.00th=[ 490], 20.00th=[ 506], 00:32:08.099 | 30.00th=[ 523], 40.00th=[ 545], 50.00th=[ 676], 60.00th=[40633], 00:32:08.099 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:08.099 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:32:08.099 | 99.99th=[42206] 00:32:08.099 bw ( KiB/s): min= 576, max= 1312, per=45.07%, avg=843.25, stdev=201.93, samples=20 00:32:08.099 iops : min= 144, max= 328, avg=210.80, stdev=50.48, samples=20 00:32:08.099 lat (usec) : 500=17.28%, 750=33.10%, 1000=3.22% 00:32:08.099 lat (msec) : 2=0.95%, 10=0.19%, 50=45.27% 00:32:08.099 cpu : usr=95.34%, sys=4.18%, ctx=14, majf=0, minf=0 00:32:08.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.099 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.099 issued rwts: total=2112,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:08.099 filename1: (groupid=0, jobs=1): err= 0: pid=90478: Fri Apr 26 15:50:37 2024 00:32:08.099 read: IOPS=257, BW=1028KiB/s (1053kB/s)(10.1MiB/10035msec) 00:32:08.099 slat (nsec): min=5317, max=50993, avg=9502.23, stdev=4586.14 00:32:08.099 clat (usec): min=438, max=42191, avg=15528.35, stdev=19588.45 00:32:08.099 lat (usec): min=449, max=42219, avg=15537.85, stdev=19588.19 00:32:08.099 clat percentiles (usec): 00:32:08.099 | 1.00th=[ 461], 5.00th=[ 469], 10.00th=[ 478], 20.00th=[ 490], 00:32:08.099 | 30.00th=[ 498], 40.00th=[ 515], 50.00th=[ 545], 60.00th=[ 848], 00:32:08.099 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:32:08.099 | 99.00th=[41681], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:32:08.099 | 99.99th=[42206] 00:32:08.099 bw ( KiB/s): min= 736, max= 1408, per=55.07%, avg=1030.40, stdev=200.67, samples=20 00:32:08.099 iops : min= 184, max= 352, avg=257.60, stdev=50.17, samples=20 00:32:08.099 lat (usec) : 500=30.43%, 750=27.52%, 1000=4.19% 00:32:08.099 lat (msec) : 2=0.81%, 10=0.16%, 50=36.90% 00:32:08.099 cpu : usr=95.03%, sys=4.54%, ctx=15, majf=0, minf=0 00:32:08.099 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:08.099 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.099 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:08.099 issued rwts: total=2580,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:08.099 latency : target=0, window=0, percentile=100.00%, depth=4 00:32:08.099 00:32:08.099 Run status group 0 (all jobs): 00:32:08.099 READ: bw=1870KiB/s (1915kB/s), 844KiB/s-1028KiB/s (864kB/s-1053kB/s), io=18.3MiB (19.2MB), run=10014-10035msec 00:32:08.099 15:50:38 -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:32:08.099 15:50:38 -- target/dif.sh@43 -- # local sub 00:32:08.099 15:50:38 -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.099 15:50:38 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:08.099 15:50:38 -- target/dif.sh@36 -- # local sub_id=0 00:32:08.099 15:50:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@45 -- # for sub in "$@" 00:32:08.099 15:50:38 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:08.099 15:50:38 -- target/dif.sh@36 -- # local sub_id=1 00:32:08.099 15:50:38 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- 
common/autotest_common.sh@10 -- # set +x 00:32:08.099 ************************************ 00:32:08.099 END TEST fio_dif_1_multi_subsystems 00:32:08.099 ************************************ 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 00:32:08.099 real 0m11.250s 00:32:08.099 user 0m19.944s 00:32:08.099 sys 0m1.157s 00:32:08.099 15:50:38 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 15:50:38 -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:32:08.099 15:50:38 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:08.099 15:50:38 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 ************************************ 00:32:08.099 START TEST fio_dif_rand_params 00:32:08.099 ************************************ 00:32:08.099 15:50:38 -- common/autotest_common.sh@1111 -- # fio_dif_rand_params 00:32:08.099 15:50:38 -- target/dif.sh@100 -- # local NULL_DIF 00:32:08.099 15:50:38 -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:32:08.099 15:50:38 -- target/dif.sh@103 -- # NULL_DIF=3 00:32:08.099 15:50:38 -- target/dif.sh@103 -- # bs=128k 00:32:08.099 15:50:38 -- target/dif.sh@103 -- # numjobs=3 00:32:08.099 15:50:38 -- target/dif.sh@103 -- # iodepth=3 00:32:08.099 15:50:38 -- target/dif.sh@103 -- # runtime=5 00:32:08.099 15:50:38 -- target/dif.sh@105 -- # create_subsystems 0 00:32:08.099 15:50:38 -- target/dif.sh@28 -- # local sub 00:32:08.099 15:50:38 -- target/dif.sh@30 -- # for sub in "$@" 00:32:08.099 15:50:38 -- target/dif.sh@31 -- # create_subsystem 0 00:32:08.099 15:50:38 -- target/dif.sh@18 -- # local sub_id=0 00:32:08.099 15:50:38 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 bdev_null0 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:08.099 15:50:38 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:08.099 15:50:38 -- common/autotest_common.sh@10 -- # set +x 00:32:08.099 [2024-04-26 15:50:38.348338] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:08.099 15:50:38 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:08.099 15:50:38 -- target/dif.sh@106 -- # fio /dev/fd/62 00:32:08.099 15:50:38 -- target/dif.sh@106 -- # create_json_sub_conf 0 00:32:08.099 15:50:38 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:08.099 15:50:38 
-- nvmf/common.sh@521 -- # config=() 00:32:08.099 15:50:38 -- nvmf/common.sh@521 -- # local subsystem config 00:32:08.099 15:50:38 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.099 15:50:38 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:08.099 15:50:38 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.099 15:50:38 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:08.099 { 00:32:08.099 "params": { 00:32:08.099 "name": "Nvme$subsystem", 00:32:08.099 "trtype": "$TEST_TRANSPORT", 00:32:08.099 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:08.099 "adrfam": "ipv4", 00:32:08.099 "trsvcid": "$NVMF_PORT", 00:32:08.099 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:08.099 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:08.099 "hdgst": ${hdgst:-false}, 00:32:08.099 "ddgst": ${ddgst:-false} 00:32:08.099 }, 00:32:08.099 "method": "bdev_nvme_attach_controller" 00:32:08.099 } 00:32:08.099 EOF 00:32:08.099 )") 00:32:08.099 15:50:38 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:08.099 15:50:38 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:08.099 15:50:38 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:08.099 15:50:38 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:08.099 15:50:38 -- common/autotest_common.sh@1327 -- # shift 00:32:08.099 15:50:38 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:08.099 15:50:38 -- target/dif.sh@82 -- # gen_fio_conf 00:32:08.099 15:50:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.099 15:50:38 -- target/dif.sh@54 -- # local file 00:32:08.099 15:50:38 -- nvmf/common.sh@543 -- # cat 00:32:08.099 15:50:38 -- target/dif.sh@56 -- # cat 00:32:08.099 15:50:38 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:08.099 15:50:38 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:08.099 15:50:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:08.099 15:50:38 -- nvmf/common.sh@545 -- # jq . 
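[Editor's sketch] The per-controller JSON fragment that gen_nvmf_target_json prints just below is what the spdk_bdev fio plugin consumes through --spdk_json_conf. A minimal standalone version written to a file looks roughly like this; the outer "subsystems"/"bdev" wrapper follows SPDK's usual JSON config layout and, like the /tmp path, is an assumption, while the parameter values are taken from the trace:

    cat > /tmp/nvme_bdev.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }
    EOF
    # Run fio with the SPDK bdev plugin preloaded, as the traced wrapper does
    # (plugin and fio paths copied from the trace; job file name is a placeholder):
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /tmp/nvme_bdev.json <jobfile>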
00:32:08.099 15:50:38 -- nvmf/common.sh@546 -- # IFS=, 00:32:08.099 15:50:38 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:08.099 "params": { 00:32:08.099 "name": "Nvme0", 00:32:08.099 "trtype": "tcp", 00:32:08.099 "traddr": "10.0.0.2", 00:32:08.099 "adrfam": "ipv4", 00:32:08.099 "trsvcid": "4420", 00:32:08.099 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:08.099 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:08.099 "hdgst": false, 00:32:08.099 "ddgst": false 00:32:08.099 }, 00:32:08.099 "method": "bdev_nvme_attach_controller" 00:32:08.099 }' 00:32:08.099 15:50:38 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:08.099 15:50:38 -- target/dif.sh@72 -- # (( file <= files )) 00:32:08.099 15:50:38 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:08.099 15:50:38 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:08.099 15:50:38 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:08.099 15:50:38 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:08.099 15:50:38 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:08.100 15:50:38 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:08.357 15:50:38 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:08.357 15:50:38 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:08.357 15:50:38 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:08.357 15:50:38 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:08.357 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:08.357 ... 00:32:08.357 fio-3.35 00:32:08.357 Starting 3 threads 00:32:14.910 00:32:14.910 filename0: (groupid=0, jobs=1): err= 0: pid=90639: Fri Apr 26 15:50:44 2024 00:32:14.910 read: IOPS=265, BW=33.2MiB/s (34.8MB/s)(166MiB/5007msec) 00:32:14.910 slat (nsec): min=7108, max=64476, avg=12542.79, stdev=4411.96 00:32:14.910 clat (usec): min=6212, max=52721, avg=11262.38, stdev=3965.64 00:32:14.910 lat (usec): min=6224, max=52737, avg=11274.92, stdev=3965.81 00:32:14.910 clat percentiles (usec): 00:32:14.910 | 1.00th=[ 7767], 5.00th=[ 9241], 10.00th=[ 9896], 20.00th=[10290], 00:32:14.910 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11207], 00:32:14.910 | 70.00th=[11469], 80.00th=[11731], 90.00th=[11994], 95.00th=[12256], 00:32:14.910 | 99.00th=[13042], 99.50th=[51643], 99.90th=[52167], 99.95th=[52691], 00:32:14.910 | 99.99th=[52691] 00:32:14.910 bw ( KiB/s): min=31488, max=35328, per=37.10%, avg=34022.40, stdev=1538.13, samples=10 00:32:14.910 iops : min= 246, max= 276, avg=265.80, stdev=12.02, samples=10 00:32:14.910 lat (msec) : 10=14.27%, 20=84.82%, 100=0.90% 00:32:14.910 cpu : usr=92.13%, sys=6.35%, ctx=6, majf=0, minf=0 00:32:14.910 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.910 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.910 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.910 issued rwts: total=1331,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.910 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:14.910 filename0: (groupid=0, jobs=1): err= 0: pid=90640: Fri Apr 26 15:50:44 2024 00:32:14.910 read: IOPS=248, BW=31.0MiB/s (32.5MB/s)(155MiB/5005msec) 00:32:14.910 slat (nsec): min=7059, max=40466, avg=11767.61, stdev=4161.10 00:32:14.911 clat 
(usec): min=6352, max=52748, avg=12066.49, stdev=3096.65 00:32:14.911 lat (usec): min=6362, max=52774, avg=12078.26, stdev=3096.80 00:32:14.911 clat percentiles (usec): 00:32:14.911 | 1.00th=[ 7111], 5.00th=[ 8717], 10.00th=[10552], 20.00th=[11207], 00:32:14.911 | 30.00th=[11731], 40.00th=[11994], 50.00th=[12125], 60.00th=[12256], 00:32:14.911 | 70.00th=[12518], 80.00th=[12649], 90.00th=[13042], 95.00th=[13304], 00:32:14.911 | 99.00th=[14353], 99.50th=[14877], 99.90th=[52691], 99.95th=[52691], 00:32:14.911 | 99.99th=[52691] 00:32:14.911 bw ( KiB/s): min=30208, max=33792, per=34.52%, avg=31658.67, stdev=1123.20, samples=9 00:32:14.911 iops : min= 236, max= 264, avg=247.33, stdev= 8.77, samples=9 00:32:14.911 lat (msec) : 10=5.88%, 20=93.64%, 100=0.48% 00:32:14.911 cpu : usr=92.23%, sys=6.37%, ctx=6, majf=0, minf=0 00:32:14.911 IO depths : 1=9.7%, 2=90.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.911 issued rwts: total=1242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:14.911 filename0: (groupid=0, jobs=1): err= 0: pid=90641: Fri Apr 26 15:50:44 2024 00:32:14.911 read: IOPS=202, BW=25.3MiB/s (26.6MB/s)(127MiB/5005msec) 00:32:14.911 slat (nsec): min=7011, max=55592, avg=12009.62, stdev=6209.12 00:32:14.911 clat (usec): min=8440, max=17355, avg=14776.28, stdev=1674.90 00:32:14.911 lat (usec): min=8448, max=17376, avg=14788.29, stdev=1675.53 00:32:14.911 clat percentiles (usec): 00:32:14.911 | 1.00th=[ 9110], 5.00th=[10028], 10.00th=[13566], 20.00th=[14222], 00:32:14.911 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15008], 60.00th=[15401], 00:32:14.911 | 70.00th=[15533], 80.00th=[15795], 90.00th=[16450], 95.00th=[16712], 00:32:14.911 | 99.00th=[17171], 99.50th=[17171], 99.90th=[17433], 99.95th=[17433], 00:32:14.911 | 99.99th=[17433] 00:32:14.911 bw ( KiB/s): min=24576, max=28416, per=28.38%, avg=26026.67, stdev=1459.42, samples=9 00:32:14.911 iops : min= 192, max= 222, avg=203.33, stdev=11.40, samples=9 00:32:14.911 lat (msec) : 10=5.33%, 20=94.67% 00:32:14.911 cpu : usr=92.47%, sys=6.14%, ctx=6, majf=0, minf=9 00:32:14.911 IO depths : 1=33.3%, 2=66.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:14.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:14.911 issued rwts: total=1014,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:14.911 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:14.911 00:32:14.911 Run status group 0 (all jobs): 00:32:14.911 READ: bw=89.5MiB/s (93.9MB/s), 25.3MiB/s-33.2MiB/s (26.6MB/s-34.8MB/s), io=448MiB (470MB), run=5005-5007msec 00:32:14.911 15:50:44 -- target/dif.sh@107 -- # destroy_subsystems 0 00:32:14.911 15:50:44 -- target/dif.sh@43 -- # local sub 00:32:14.911 15:50:44 -- target/dif.sh@45 -- # for sub in "$@" 00:32:14.911 15:50:44 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:14.911 15:50:44 -- target/dif.sh@36 -- # local sub_id=0 00:32:14.911 15:50:44 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@39 
-- # rpc_cmd bdev_null_delete bdev_null0 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@109 -- # NULL_DIF=2 00:32:14.911 15:50:44 -- target/dif.sh@109 -- # bs=4k 00:32:14.911 15:50:44 -- target/dif.sh@109 -- # numjobs=8 00:32:14.911 15:50:44 -- target/dif.sh@109 -- # iodepth=16 00:32:14.911 15:50:44 -- target/dif.sh@109 -- # runtime= 00:32:14.911 15:50:44 -- target/dif.sh@109 -- # files=2 00:32:14.911 15:50:44 -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:32:14.911 15:50:44 -- target/dif.sh@28 -- # local sub 00:32:14.911 15:50:44 -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.911 15:50:44 -- target/dif.sh@31 -- # create_subsystem 0 00:32:14.911 15:50:44 -- target/dif.sh@18 -- # local sub_id=0 00:32:14.911 15:50:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 bdev_null0 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 [2024-04-26 15:50:44.435662] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.911 15:50:44 -- target/dif.sh@31 -- # create_subsystem 1 00:32:14.911 15:50:44 -- target/dif.sh@18 -- # local sub_id=1 00:32:14.911 15:50:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 bdev_null1 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 
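[Editor's sketch] The cnode0 sequence just traced (a 64 MiB null bdev with 512-byte blocks, 16 bytes of metadata and DIF type 2, an NVMe-oF subsystem, a namespace, and a TCP listener) repeats below for cnode1 and cnode2. Done by hand with SPDK's RPC client, the same setup would look roughly like this; the ./scripts/rpc.py path and the default RPC socket are assumptions, the arguments are copied from the trace:

    # create the protected null bdev and export it over NVMe/TCP
    ./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420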
00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@30 -- # for sub in "$@" 00:32:14.911 15:50:44 -- target/dif.sh@31 -- # create_subsystem 2 00:32:14.911 15:50:44 -- target/dif.sh@18 -- # local sub_id=2 00:32:14.911 15:50:44 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 bdev_null2 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:14.911 15:50:44 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:14.911 15:50:44 -- common/autotest_common.sh@10 -- # set +x 00:32:14.911 15:50:44 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:14.911 15:50:44 -- target/dif.sh@112 -- # fio /dev/fd/62 00:32:14.911 15:50:44 -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:32:14.911 15:50:44 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:32:14.911 15:50:44 -- nvmf/common.sh@521 -- # config=() 00:32:14.911 15:50:44 -- nvmf/common.sh@521 -- # local subsystem config 00:32:14.911 15:50:44 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.911 15:50:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:14.911 15:50:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:14.911 { 00:32:14.911 "params": { 00:32:14.911 "name": "Nvme$subsystem", 00:32:14.911 "trtype": "$TEST_TRANSPORT", 00:32:14.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.911 "adrfam": "ipv4", 00:32:14.911 "trsvcid": "$NVMF_PORT", 00:32:14.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.911 "hdgst": ${hdgst:-false}, 00:32:14.911 "ddgst": ${ddgst:-false} 00:32:14.911 }, 00:32:14.911 "method": "bdev_nvme_attach_controller" 00:32:14.911 } 00:32:14.911 EOF 00:32:14.911 )") 00:32:14.911 15:50:44 -- target/dif.sh@82 -- # gen_fio_conf 00:32:14.911 15:50:44 -- target/dif.sh@54 -- # local file 00:32:14.911 15:50:44 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.911 
15:50:44 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:14.911 15:50:44 -- target/dif.sh@56 -- # cat 00:32:14.911 15:50:44 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:14.911 15:50:44 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:14.911 15:50:44 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:14.911 15:50:44 -- common/autotest_common.sh@1327 -- # shift 00:32:14.911 15:50:44 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:14.911 15:50:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.911 15:50:44 -- nvmf/common.sh@543 -- # cat 00:32:14.911 15:50:44 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:14.911 15:50:44 -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.911 15:50:44 -- target/dif.sh@73 -- # cat 00:32:14.911 15:50:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:14.911 15:50:44 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:14.911 15:50:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:14.911 15:50:44 -- target/dif.sh@72 -- # (( file++ )) 00:32:14.911 15:50:44 -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.911 15:50:44 -- target/dif.sh@73 -- # cat 00:32:14.911 15:50:44 -- target/dif.sh@72 -- # (( file++ )) 00:32:14.911 15:50:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:14.911 15:50:44 -- target/dif.sh@72 -- # (( file <= files )) 00:32:14.911 15:50:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:14.911 { 00:32:14.911 "params": { 00:32:14.911 "name": "Nvme$subsystem", 00:32:14.911 "trtype": "$TEST_TRANSPORT", 00:32:14.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.911 "adrfam": "ipv4", 00:32:14.911 "trsvcid": "$NVMF_PORT", 00:32:14.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.911 "hdgst": ${hdgst:-false}, 00:32:14.911 "ddgst": ${ddgst:-false} 00:32:14.911 }, 00:32:14.911 "method": "bdev_nvme_attach_controller" 00:32:14.911 } 00:32:14.911 EOF 00:32:14.911 )") 00:32:14.911 15:50:44 -- nvmf/common.sh@543 -- # cat 00:32:14.911 15:50:44 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:14.911 15:50:44 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:14.911 { 00:32:14.911 "params": { 00:32:14.911 "name": "Nvme$subsystem", 00:32:14.911 "trtype": "$TEST_TRANSPORT", 00:32:14.911 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:14.911 "adrfam": "ipv4", 00:32:14.911 "trsvcid": "$NVMF_PORT", 00:32:14.911 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:14.911 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:14.911 "hdgst": ${hdgst:-false}, 00:32:14.911 "ddgst": ${ddgst:-false} 00:32:14.911 }, 00:32:14.911 "method": "bdev_nvme_attach_controller" 00:32:14.911 } 00:32:14.911 EOF 00:32:14.911 )") 00:32:14.911 15:50:44 -- nvmf/common.sh@543 -- # cat 00:32:14.911 15:50:44 -- nvmf/common.sh@545 -- # jq . 
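[Editor's sketch] gen_fio_conf, traced above, assembles the fio job file that is fed to fio alongside the JSON config (the config goes in via --spdk_json_conf /dev/fd/62, the job file is the trailing /dev/fd/61 argument). Based on the options set for this test (bs=4k, numjobs=8, iodepth=16, three files) and the banner printed below, the generated job file is roughly equivalent to the following; the section layout and the Nvme0n1/Nvme1n1/Nvme2n1 bdev names (SPDK's usual <controller>n<nsid> naming) are assumptions:

    cat > /tmp/dif.job <<'EOF'
    [global]
    ioengine=spdk_bdev   ; actually passed on the fio command line in the trace
    rw=randread
    bs=4k
    iodepth=16
    numjobs=8
    thread=1

    [filename0]
    filename=Nvme0n1

    [filename1]
    filename=Nvme1n1

    [filename2]
    filename=Nvme2n1
    EOF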
00:32:14.911 15:50:44 -- nvmf/common.sh@546 -- # IFS=, 00:32:14.911 15:50:44 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:14.911 "params": { 00:32:14.911 "name": "Nvme0", 00:32:14.911 "trtype": "tcp", 00:32:14.911 "traddr": "10.0.0.2", 00:32:14.911 "adrfam": "ipv4", 00:32:14.911 "trsvcid": "4420", 00:32:14.911 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:14.911 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:14.911 "hdgst": false, 00:32:14.911 "ddgst": false 00:32:14.911 }, 00:32:14.911 "method": "bdev_nvme_attach_controller" 00:32:14.911 },{ 00:32:14.911 "params": { 00:32:14.911 "name": "Nvme1", 00:32:14.911 "trtype": "tcp", 00:32:14.911 "traddr": "10.0.0.2", 00:32:14.911 "adrfam": "ipv4", 00:32:14.912 "trsvcid": "4420", 00:32:14.912 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:14.912 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:14.912 "hdgst": false, 00:32:14.912 "ddgst": false 00:32:14.912 }, 00:32:14.912 "method": "bdev_nvme_attach_controller" 00:32:14.912 },{ 00:32:14.912 "params": { 00:32:14.912 "name": "Nvme2", 00:32:14.912 "trtype": "tcp", 00:32:14.912 "traddr": "10.0.0.2", 00:32:14.912 "adrfam": "ipv4", 00:32:14.912 "trsvcid": "4420", 00:32:14.912 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:32:14.912 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:32:14.912 "hdgst": false, 00:32:14.912 "ddgst": false 00:32:14.912 }, 00:32:14.912 "method": "bdev_nvme_attach_controller" 00:32:14.912 }' 00:32:14.912 15:50:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:14.912 15:50:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:14.912 15:50:44 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:14.912 15:50:44 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:14.912 15:50:44 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:14.912 15:50:44 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:14.912 15:50:44 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:14.912 15:50:44 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:14.912 15:50:44 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:14.912 15:50:44 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:14.912 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:14.912 ... 00:32:14.912 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:14.912 ... 00:32:14.912 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:32:14.912 ... 
00:32:14.912 fio-3.35 00:32:14.912 Starting 24 threads 00:32:27.104 00:32:27.104 filename0: (groupid=0, jobs=1): err= 0: pid=90736: Fri Apr 26 15:50:55 2024 00:32:27.104 read: IOPS=177, BW=708KiB/s (725kB/s)(7092KiB/10014msec) 00:32:27.104 slat (usec): min=3, max=4018, avg=13.26, stdev=95.28 00:32:27.104 clat (msec): min=34, max=168, avg=90.20, stdev=29.11 00:32:27.104 lat (msec): min=34, max=168, avg=90.21, stdev=29.11 00:32:27.104 clat percentiles (msec): 00:32:27.104 | 1.00th=[ 42], 5.00th=[ 49], 10.00th=[ 56], 20.00th=[ 65], 00:32:27.104 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 85], 60.00th=[ 94], 00:32:27.104 | 70.00th=[ 108], 80.00th=[ 118], 90.00th=[ 132], 95.00th=[ 148], 00:32:27.104 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:32:27.104 | 99.99th=[ 169] 00:32:27.104 bw ( KiB/s): min= 408, max= 1072, per=3.93%, avg=706.25, stdev=168.18, samples=20 00:32:27.104 iops : min= 102, max= 268, avg=176.50, stdev=42.10, samples=20 00:32:27.104 lat (msec) : 50=5.81%, 100=60.91%, 250=33.28% 00:32:27.104 cpu : usr=39.11%, sys=1.05%, ctx=1054, majf=0, minf=9 00:32:27.104 IO depths : 1=1.0%, 2=2.5%, 4=9.1%, 8=74.2%, 16=13.2%, 32=0.0%, >=64=0.0% 00:32:27.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.104 complete : 0=0.0%, 4=90.2%, 8=5.8%, 16=4.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.104 issued rwts: total=1773,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.104 filename0: (groupid=0, jobs=1): err= 0: pid=90737: Fri Apr 26 15:50:55 2024 00:32:27.104 read: IOPS=163, BW=655KiB/s (670kB/s)(6556KiB/10013msec) 00:32:27.104 slat (usec): min=4, max=8022, avg=23.40, stdev=305.85 00:32:27.104 clat (msec): min=17, max=191, avg=97.53, stdev=27.56 00:32:27.104 lat (msec): min=17, max=191, avg=97.55, stdev=27.56 00:32:27.104 clat percentiles (msec): 00:32:27.104 | 1.00th=[ 40], 5.00th=[ 57], 10.00th=[ 62], 20.00th=[ 73], 00:32:27.104 | 30.00th=[ 82], 40.00th=[ 87], 50.00th=[ 99], 60.00th=[ 107], 00:32:27.104 | 70.00th=[ 111], 80.00th=[ 121], 90.00th=[ 134], 95.00th=[ 144], 00:32:27.104 | 99.00th=[ 169], 99.50th=[ 190], 99.90th=[ 192], 99.95th=[ 192], 00:32:27.104 | 99.99th=[ 192] 00:32:27.104 bw ( KiB/s): min= 512, max= 1000, per=3.62%, avg=651.84, stdev=148.02, samples=19 00:32:27.104 iops : min= 128, max= 250, avg=162.95, stdev=36.99, samples=19 00:32:27.104 lat (msec) : 20=0.61%, 50=2.99%, 100=48.38%, 250=48.02% 00:32:27.104 cpu : usr=38.18%, sys=1.04%, ctx=1081, majf=0, minf=9 00:32:27.104 IO depths : 1=3.3%, 2=7.1%, 4=16.8%, 8=63.3%, 16=9.5%, 32=0.0%, >=64=0.0% 00:32:27.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.104 complete : 0=0.0%, 4=92.0%, 8=2.6%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.104 issued rwts: total=1639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.104 filename0: (groupid=0, jobs=1): err= 0: pid=90738: Fri Apr 26 15:50:55 2024 00:32:27.104 read: IOPS=168, BW=674KiB/s (690kB/s)(6744KiB/10012msec) 00:32:27.104 slat (usec): min=4, max=4019, avg=13.57, stdev=100.98 00:32:27.104 clat (msec): min=34, max=175, avg=94.92, stdev=22.99 00:32:27.104 lat (msec): min=34, max=175, avg=94.93, stdev=22.99 00:32:27.104 clat percentiles (msec): 00:32:27.104 | 1.00th=[ 48], 5.00th=[ 63], 10.00th=[ 71], 20.00th=[ 75], 00:32:27.104 | 30.00th=[ 82], 40.00th=[ 85], 50.00th=[ 94], 60.00th=[ 97], 00:32:27.104 | 70.00th=[ 108], 80.00th=[ 114], 90.00th=[ 122], 
95.00th=[ 140], 00:32:27.104 | 99.00th=[ 163], 99.50th=[ 174], 99.90th=[ 176], 99.95th=[ 176], 00:32:27.104 | 99.99th=[ 176] 00:32:27.104 bw ( KiB/s): min= 424, max= 896, per=3.71%, avg=667.55, stdev=112.08, samples=20 00:32:27.104 iops : min= 106, max= 224, avg=166.85, stdev=28.03, samples=20 00:32:27.104 lat (msec) : 50=2.08%, 100=60.79%, 250=37.13% 00:32:27.104 cpu : usr=41.23%, sys=1.06%, ctx=1428, majf=0, minf=9 00:32:27.104 IO depths : 1=3.6%, 2=7.5%, 4=17.3%, 8=62.4%, 16=9.3%, 32=0.0%, >=64=0.0% 00:32:27.104 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.104 complete : 0=0.0%, 4=92.1%, 8=2.4%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.104 issued rwts: total=1686,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.104 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.104 filename0: (groupid=0, jobs=1): err= 0: pid=90739: Fri Apr 26 15:50:55 2024 00:32:27.104 read: IOPS=194, BW=778KiB/s (796kB/s)(7824KiB/10059msec) 00:32:27.104 slat (usec): min=4, max=8024, avg=18.62, stdev=256.15 00:32:27.104 clat (msec): min=6, max=187, avg=82.08, stdev=29.97 00:32:27.104 lat (msec): min=6, max=187, avg=82.09, stdev=29.98 00:32:27.104 clat percentiles (msec): 00:32:27.104 | 1.00th=[ 8], 5.00th=[ 47], 10.00th=[ 48], 20.00th=[ 58], 00:32:27.104 | 30.00th=[ 61], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 85], 00:32:27.104 | 70.00th=[ 96], 80.00th=[ 109], 90.00th=[ 120], 95.00th=[ 144], 00:32:27.104 | 99.00th=[ 157], 99.50th=[ 169], 99.90th=[ 180], 99.95th=[ 188], 00:32:27.104 | 99.99th=[ 188] 00:32:27.105 bw ( KiB/s): min= 512, max= 1026, per=4.32%, avg=776.10, stdev=175.34, samples=20 00:32:27.105 iops : min= 128, max= 256, avg=194.00, stdev=43.80, samples=20 00:32:27.105 lat (msec) : 10=1.64%, 50=11.15%, 100=60.07%, 250=27.15% 00:32:27.105 cpu : usr=32.35%, sys=0.86%, ctx=906, majf=0, minf=9 00:32:27.105 IO depths : 1=1.2%, 2=2.4%, 4=10.5%, 8=73.8%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=89.7%, 8=5.6%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename0: (groupid=0, jobs=1): err= 0: pid=90740: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=215, BW=864KiB/s (884kB/s)(8704KiB/10078msec) 00:32:27.105 slat (usec): min=4, max=8022, avg=18.01, stdev=242.80 00:32:27.105 clat (usec): min=1569, max=154763, avg=73972.82, stdev=25899.90 00:32:27.105 lat (usec): min=1578, max=154777, avg=73990.83, stdev=25902.44 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 3], 5.00th=[ 37], 10.00th=[ 48], 20.00th=[ 57], 00:32:27.105 | 30.00th=[ 61], 40.00th=[ 69], 50.00th=[ 72], 60.00th=[ 80], 00:32:27.105 | 70.00th=[ 85], 80.00th=[ 96], 90.00th=[ 108], 95.00th=[ 121], 00:32:27.105 | 99.00th=[ 138], 99.50th=[ 144], 99.90th=[ 155], 99.95th=[ 155], 00:32:27.105 | 99.99th=[ 155] 00:32:27.105 bw ( KiB/s): min= 600, max= 1657, per=4.80%, avg=863.55, stdev=223.98, samples=20 00:32:27.105 iops : min= 150, max= 414, avg=215.85, stdev=55.96, samples=20 00:32:27.105 lat (msec) : 2=0.74%, 4=0.74%, 10=2.21%, 50=12.68%, 100=67.83% 00:32:27.105 lat (msec) : 250=15.81% 00:32:27.105 cpu : usr=34.30%, sys=0.75%, ctx=918, majf=0, minf=0 00:32:27.105 IO depths : 1=0.9%, 2=2.3%, 4=10.2%, 8=74.3%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 
complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=2176,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename0: (groupid=0, jobs=1): err= 0: pid=90741: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=190, BW=762KiB/s (780kB/s)(7660KiB/10052msec) 00:32:27.105 slat (usec): min=6, max=8025, avg=17.78, stdev=205.06 00:32:27.105 clat (msec): min=13, max=156, avg=83.86, stdev=25.59 00:32:27.105 lat (msec): min=13, max=156, avg=83.87, stdev=25.59 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 57], 20.00th=[ 61], 00:32:27.105 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 85], 00:32:27.105 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 132], 00:32:27.105 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 157], 00:32:27.105 | 99.99th=[ 157] 00:32:27.105 bw ( KiB/s): min= 552, max= 1024, per=4.22%, avg=759.60, stdev=118.79, samples=20 00:32:27.105 iops : min= 138, max= 256, avg=189.90, stdev=29.70, samples=20 00:32:27.105 lat (msec) : 20=0.84%, 50=7.78%, 100=66.89%, 250=24.49% 00:32:27.105 cpu : usr=33.42%, sys=0.91%, ctx=881, majf=0, minf=9 00:32:27.105 IO depths : 1=1.4%, 2=3.0%, 4=10.5%, 8=73.1%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=90.0%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=1915,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename0: (groupid=0, jobs=1): err= 0: pid=90742: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=195, BW=781KiB/s (799kB/s)(7824KiB/10021msec) 00:32:27.105 slat (usec): min=4, max=1913, avg=11.70, stdev=43.28 00:32:27.105 clat (msec): min=31, max=155, avg=81.86, stdev=24.84 00:32:27.105 lat (msec): min=31, max=155, avg=81.87, stdev=24.84 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 40], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 59], 00:32:27.105 | 30.00th=[ 67], 40.00th=[ 72], 50.00th=[ 77], 60.00th=[ 85], 00:32:27.105 | 70.00th=[ 95], 80.00th=[ 109], 90.00th=[ 114], 95.00th=[ 123], 00:32:27.105 | 99.00th=[ 148], 99.50th=[ 153], 99.90th=[ 157], 99.95th=[ 157], 00:32:27.105 | 99.99th=[ 157] 00:32:27.105 bw ( KiB/s): min= 507, max= 1072, per=4.31%, avg=775.65, stdev=166.60, samples=20 00:32:27.105 iops : min= 126, max= 268, avg=193.85, stdev=41.74, samples=20 00:32:27.105 lat (msec) : 50=7.26%, 100=67.33%, 250=25.41% 00:32:27.105 cpu : usr=41.74%, sys=1.25%, ctx=1539, majf=0, minf=9 00:32:27.105 IO depths : 1=1.2%, 2=2.6%, 4=8.8%, 8=74.8%, 16=12.7%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=89.9%, 8=5.8%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=1956,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename0: (groupid=0, jobs=1): err= 0: pid=90743: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=164, BW=659KiB/s (675kB/s)(6600KiB/10015msec) 00:32:27.105 slat (usec): min=3, max=4020, avg=17.90, stdev=139.94 00:32:27.105 clat (msec): min=24, max=170, avg=96.92, stdev=24.88 00:32:27.105 lat (msec): min=24, max=170, avg=96.94, stdev=24.88 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 48], 5.00th=[ 57], 10.00th=[ 70], 20.00th=[ 75], 
00:32:27.105 | 30.00th=[ 81], 40.00th=[ 90], 50.00th=[ 99], 60.00th=[ 105], 00:32:27.105 | 70.00th=[ 111], 80.00th=[ 116], 90.00th=[ 127], 95.00th=[ 140], 00:32:27.105 | 99.00th=[ 159], 99.50th=[ 165], 99.90th=[ 171], 99.95th=[ 171], 00:32:27.105 | 99.99th=[ 171] 00:32:27.105 bw ( KiB/s): min= 507, max= 896, per=3.66%, avg=657.40, stdev=113.61, samples=20 00:32:27.105 iops : min= 126, max= 224, avg=164.30, stdev=28.46, samples=20 00:32:27.105 lat (msec) : 50=3.33%, 100=50.42%, 250=46.24% 00:32:27.105 cpu : usr=43.94%, sys=1.22%, ctx=1543, majf=0, minf=9 00:32:27.105 IO depths : 1=2.5%, 2=5.3%, 4=14.3%, 8=66.3%, 16=11.6%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=91.6%, 8=4.2%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=1650,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename1: (groupid=0, jobs=1): err= 0: pid=90744: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=209, BW=837KiB/s (857kB/s)(8404KiB/10040msec) 00:32:27.105 slat (usec): min=5, max=8022, avg=18.51, stdev=247.33 00:32:27.105 clat (msec): min=33, max=153, avg=76.30, stdev=22.41 00:32:27.105 lat (msec): min=33, max=154, avg=76.32, stdev=22.41 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 37], 5.00th=[ 47], 10.00th=[ 50], 20.00th=[ 57], 00:32:27.105 | 30.00th=[ 61], 40.00th=[ 70], 50.00th=[ 73], 60.00th=[ 80], 00:32:27.105 | 70.00th=[ 85], 80.00th=[ 93], 90.00th=[ 108], 95.00th=[ 121], 00:32:27.105 | 99.00th=[ 133], 99.50th=[ 150], 99.90th=[ 155], 99.95th=[ 155], 00:32:27.105 | 99.99th=[ 155] 00:32:27.105 bw ( KiB/s): min= 634, max= 1088, per=4.63%, avg=833.70, stdev=137.11, samples=20 00:32:27.105 iops : min= 158, max= 272, avg=208.40, stdev=34.31, samples=20 00:32:27.105 lat (msec) : 50=10.85%, 100=73.16%, 250=15.99% 00:32:27.105 cpu : usr=38.65%, sys=1.17%, ctx=1095, majf=0, minf=9 00:32:27.105 IO depths : 1=0.3%, 2=0.9%, 4=7.0%, 8=78.3%, 16=13.4%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=89.4%, 8=6.3%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=2101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename1: (groupid=0, jobs=1): err= 0: pid=90745: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=216, BW=868KiB/s (889kB/s)(8728KiB/10057msec) 00:32:27.105 slat (usec): min=4, max=8037, avg=14.33, stdev=171.89 00:32:27.105 clat (msec): min=16, max=143, avg=73.42, stdev=23.06 00:32:27.105 lat (msec): min=16, max=143, avg=73.44, stdev=23.05 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 24], 5.00th=[ 42], 10.00th=[ 48], 20.00th=[ 55], 00:32:27.105 | 30.00th=[ 59], 40.00th=[ 65], 50.00th=[ 72], 60.00th=[ 73], 00:32:27.105 | 70.00th=[ 81], 80.00th=[ 95], 90.00th=[ 107], 95.00th=[ 118], 00:32:27.105 | 99.00th=[ 136], 99.50th=[ 138], 99.90th=[ 144], 99.95th=[ 144], 00:32:27.105 | 99.99th=[ 144] 00:32:27.105 bw ( KiB/s): min= 688, max= 1120, per=4.83%, avg=868.45, stdev=141.75, samples=20 00:32:27.105 iops : min= 172, max= 280, avg=217.10, stdev=35.42, samples=20 00:32:27.105 lat (msec) : 20=0.73%, 50=13.02%, 100=71.49%, 250=14.76% 00:32:27.105 cpu : usr=43.81%, sys=1.10%, ctx=1276, majf=0, minf=9 00:32:27.105 IO depths : 1=1.2%, 2=2.4%, 4=8.9%, 8=75.4%, 16=12.1%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=89.6%, 8=5.6%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=2182,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename1: (groupid=0, jobs=1): err= 0: pid=90746: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=200, BW=804KiB/s (823kB/s)(8072KiB/10044msec) 00:32:27.105 slat (usec): min=5, max=4028, avg=15.14, stdev=126.47 00:32:27.105 clat (msec): min=32, max=158, avg=79.46, stdev=22.04 00:32:27.105 lat (msec): min=32, max=158, avg=79.48, stdev=22.03 00:32:27.105 clat percentiles (msec): 00:32:27.105 | 1.00th=[ 44], 5.00th=[ 49], 10.00th=[ 54], 20.00th=[ 62], 00:32:27.105 | 30.00th=[ 68], 40.00th=[ 72], 50.00th=[ 75], 60.00th=[ 81], 00:32:27.105 | 70.00th=[ 85], 80.00th=[ 95], 90.00th=[ 111], 95.00th=[ 121], 00:32:27.105 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 159], 99.95th=[ 159], 00:32:27.105 | 99.99th=[ 159] 00:32:27.105 bw ( KiB/s): min= 640, max= 1104, per=4.45%, avg=800.10, stdev=116.58, samples=20 00:32:27.105 iops : min= 160, max= 276, avg=200.00, stdev=29.15, samples=20 00:32:27.105 lat (msec) : 50=7.28%, 100=76.36%, 250=16.35% 00:32:27.105 cpu : usr=42.46%, sys=1.22%, ctx=1355, majf=0, minf=9 00:32:27.105 IO depths : 1=1.2%, 2=2.6%, 4=9.8%, 8=74.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:27.105 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.105 issued rwts: total=2018,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.105 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.105 filename1: (groupid=0, jobs=1): err= 0: pid=90747: Fri Apr 26 15:50:55 2024 00:32:27.105 read: IOPS=212, BW=849KiB/s (869kB/s)(8524KiB/10043msec) 00:32:27.105 slat (usec): min=3, max=4023, avg=18.44, stdev=164.86 00:32:27.105 clat (msec): min=31, max=140, avg=75.30, stdev=20.18 00:32:27.105 lat (msec): min=31, max=140, avg=75.32, stdev=20.18 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 40], 5.00th=[ 47], 10.00th=[ 51], 20.00th=[ 56], 00:32:27.106 | 30.00th=[ 66], 40.00th=[ 70], 50.00th=[ 75], 60.00th=[ 79], 00:32:27.106 | 70.00th=[ 83], 80.00th=[ 88], 90.00th=[ 105], 95.00th=[ 114], 00:32:27.106 | 99.00th=[ 129], 99.50th=[ 129], 99.90th=[ 131], 99.95th=[ 131], 00:32:27.106 | 99.99th=[ 140] 00:32:27.106 bw ( KiB/s): min= 640, max= 1072, per=4.71%, avg=846.00, stdev=121.35, samples=20 00:32:27.106 iops : min= 160, max= 268, avg=211.50, stdev=30.34, samples=20 00:32:27.106 lat (msec) : 50=10.61%, 100=75.74%, 250=13.66% 00:32:27.106 cpu : usr=45.49%, sys=1.28%, ctx=2037, majf=0, minf=9 00:32:27.106 IO depths : 1=2.1%, 2=4.5%, 4=12.9%, 8=69.8%, 16=10.7%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=90.7%, 8=4.0%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=2131,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename1: (groupid=0, jobs=1): err= 0: pid=90748: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=199, BW=799KiB/s (818kB/s)(8020KiB/10039msec) 00:32:27.106 slat (usec): min=7, max=8026, avg=30.32, stdev=399.75 00:32:27.106 clat (msec): min=31, max=190, avg=79.92, stdev=24.36 00:32:27.106 lat (msec): min=31, max=190, avg=79.95, stdev=24.37 00:32:27.106 clat percentiles (msec): 00:32:27.106 
| 1.00th=[ 39], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:32:27.106 | 30.00th=[ 64], 40.00th=[ 71], 50.00th=[ 75], 60.00th=[ 84], 00:32:27.106 | 70.00th=[ 88], 80.00th=[ 97], 90.00th=[ 108], 95.00th=[ 131], 00:32:27.106 | 99.00th=[ 161], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 00:32:27.106 | 99.99th=[ 190] 00:32:27.106 bw ( KiB/s): min= 424, max= 1120, per=4.42%, avg=795.00, stdev=149.77, samples=20 00:32:27.106 iops : min= 106, max= 280, avg=198.70, stdev=37.49, samples=20 00:32:27.106 lat (msec) : 50=8.33%, 100=73.02%, 250=18.65% 00:32:27.106 cpu : usr=32.35%, sys=0.81%, ctx=907, majf=0, minf=9 00:32:27.106 IO depths : 1=0.5%, 2=1.1%, 4=7.1%, 8=77.9%, 16=13.3%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=89.0%, 8=6.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=2005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename1: (groupid=0, jobs=1): err= 0: pid=90749: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=182, BW=730KiB/s (748kB/s)(7312KiB/10011msec) 00:32:27.106 slat (usec): min=4, max=4053, avg=12.68, stdev=94.63 00:32:27.106 clat (msec): min=23, max=201, avg=87.50, stdev=27.99 00:32:27.106 lat (msec): min=23, max=201, avg=87.52, stdev=27.99 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:32:27.106 | 30.00th=[ 72], 40.00th=[ 75], 50.00th=[ 85], 60.00th=[ 96], 00:32:27.106 | 70.00th=[ 108], 80.00th=[ 109], 90.00th=[ 121], 95.00th=[ 144], 00:32:27.106 | 99.00th=[ 155], 99.50th=[ 178], 99.90th=[ 203], 99.95th=[ 203], 00:32:27.106 | 99.99th=[ 203] 00:32:27.106 bw ( KiB/s): min= 472, max= 992, per=4.03%, avg=724.85, stdev=153.36, samples=20 00:32:27.106 iops : min= 118, max= 248, avg=181.20, stdev=38.36, samples=20 00:32:27.106 lat (msec) : 50=9.14%, 100=57.82%, 250=33.04% 00:32:27.106 cpu : usr=34.14%, sys=0.91%, ctx=921, majf=0, minf=9 00:32:27.106 IO depths : 1=1.9%, 2=3.8%, 4=11.3%, 8=71.4%, 16=11.6%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=90.2%, 8=5.1%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=1828,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename1: (groupid=0, jobs=1): err= 0: pid=90750: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=167, BW=670KiB/s (686kB/s)(6716KiB/10019msec) 00:32:27.106 slat (usec): min=4, max=8024, avg=18.39, stdev=218.93 00:32:27.106 clat (msec): min=39, max=196, avg=95.27, stdev=25.75 00:32:27.106 lat (msec): min=39, max=196, avg=95.29, stdev=25.75 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 48], 5.00th=[ 56], 10.00th=[ 64], 20.00th=[ 72], 00:32:27.106 | 30.00th=[ 83], 40.00th=[ 86], 50.00th=[ 96], 60.00th=[ 103], 00:32:27.106 | 70.00th=[ 108], 80.00th=[ 112], 90.00th=[ 124], 95.00th=[ 144], 00:32:27.106 | 99.00th=[ 176], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 197], 00:32:27.106 | 99.99th=[ 197] 00:32:27.106 bw ( KiB/s): min= 472, max= 944, per=3.72%, avg=669.00, stdev=136.14, samples=20 00:32:27.106 iops : min= 118, max= 236, avg=167.20, stdev=34.09, samples=20 00:32:27.106 lat (msec) : 50=3.87%, 100=55.45%, 250=40.68% 00:32:27.106 cpu : usr=38.28%, sys=1.04%, ctx=1050, majf=0, minf=10 00:32:27.106 IO depths : 1=2.6%, 2=5.8%, 4=15.5%, 8=65.7%, 16=10.4%, 
32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=91.4%, 8=3.5%, 16=5.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=1679,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename1: (groupid=0, jobs=1): err= 0: pid=90751: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=188, BW=756KiB/s (774kB/s)(7596KiB/10049msec) 00:32:27.106 slat (usec): min=4, max=4028, avg=15.16, stdev=130.36 00:32:27.106 clat (msec): min=41, max=193, avg=84.54, stdev=25.63 00:32:27.106 lat (msec): min=41, max=193, avg=84.55, stdev=25.62 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 45], 5.00th=[ 48], 10.00th=[ 51], 20.00th=[ 61], 00:32:27.106 | 30.00th=[ 71], 40.00th=[ 74], 50.00th=[ 83], 60.00th=[ 86], 00:32:27.106 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 132], 00:32:27.106 | 99.00th=[ 155], 99.50th=[ 157], 99.90th=[ 194], 99.95th=[ 194], 00:32:27.106 | 99.99th=[ 194] 00:32:27.106 bw ( KiB/s): min= 512, max= 1072, per=4.19%, avg=753.20, stdev=134.10, samples=20 00:32:27.106 iops : min= 128, max= 268, avg=188.30, stdev=33.52, samples=20 00:32:27.106 lat (msec) : 50=8.95%, 100=63.56%, 250=27.49% 00:32:27.106 cpu : usr=37.86%, sys=0.94%, ctx=1058, majf=0, minf=9 00:32:27.106 IO depths : 1=1.1%, 2=2.7%, 4=12.0%, 8=72.3%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=89.9%, 8=5.0%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=1899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename2: (groupid=0, jobs=1): err= 0: pid=90752: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=189, BW=759KiB/s (778kB/s)(7644KiB/10067msec) 00:32:27.106 slat (usec): min=3, max=8028, avg=15.30, stdev=183.46 00:32:27.106 clat (msec): min=8, max=161, avg=84.17, stdev=29.67 00:32:27.106 lat (msec): min=8, max=161, avg=84.19, stdev=29.67 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 19], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 57], 00:32:27.106 | 30.00th=[ 65], 40.00th=[ 72], 50.00th=[ 83], 60.00th=[ 92], 00:32:27.106 | 70.00th=[ 101], 80.00th=[ 109], 90.00th=[ 124], 95.00th=[ 136], 00:32:27.106 | 99.00th=[ 161], 99.50th=[ 161], 99.90th=[ 163], 99.95th=[ 163], 00:32:27.106 | 99.99th=[ 163] 00:32:27.106 bw ( KiB/s): min= 464, max= 1080, per=4.21%, avg=757.75, stdev=203.45, samples=20 00:32:27.106 iops : min= 116, max= 270, avg=189.40, stdev=50.86, samples=20 00:32:27.106 lat (msec) : 10=0.84%, 20=0.84%, 50=10.05%, 100=59.03%, 250=29.25% 00:32:27.106 cpu : usr=36.38%, sys=0.94%, ctx=1028, majf=0, minf=9 00:32:27.106 IO depths : 1=1.2%, 2=2.4%, 4=8.4%, 8=75.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=1911,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename2: (groupid=0, jobs=1): err= 0: pid=90753: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=184, BW=737KiB/s (754kB/s)(7384KiB/10025msec) 00:32:27.106 slat (usec): min=4, max=8029, avg=14.69, stdev=186.68 00:32:27.106 clat (msec): min=34, max=167, avg=86.74, stdev=24.13 00:32:27.106 lat (msec): min=34, max=167, 
avg=86.76, stdev=24.13 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 46], 5.00th=[ 52], 10.00th=[ 61], 20.00th=[ 69], 00:32:27.106 | 30.00th=[ 72], 40.00th=[ 77], 50.00th=[ 84], 60.00th=[ 87], 00:32:27.106 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 120], 95.00th=[ 132], 00:32:27.106 | 99.00th=[ 157], 99.50th=[ 167], 99.90th=[ 169], 99.95th=[ 169], 00:32:27.106 | 99.99th=[ 169] 00:32:27.106 bw ( KiB/s): min= 560, max= 992, per=4.07%, avg=732.00, stdev=106.24, samples=20 00:32:27.106 iops : min= 140, max= 248, avg=183.00, stdev=26.56, samples=20 00:32:27.106 lat (msec) : 50=3.47%, 100=69.28%, 250=27.25% 00:32:27.106 cpu : usr=32.24%, sys=0.93%, ctx=898, majf=0, minf=9 00:32:27.106 IO depths : 1=1.4%, 2=2.9%, 4=10.4%, 8=72.9%, 16=12.4%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=90.0%, 8=5.6%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=1846,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.106 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.106 filename2: (groupid=0, jobs=1): err= 0: pid=90754: Fri Apr 26 15:50:55 2024 00:32:27.106 read: IOPS=188, BW=756KiB/s (774kB/s)(7600KiB/10058msec) 00:32:27.106 slat (usec): min=4, max=8022, avg=19.26, stdev=259.89 00:32:27.106 clat (msec): min=26, max=167, avg=84.50, stdev=25.81 00:32:27.106 lat (msec): min=26, max=167, avg=84.52, stdev=25.81 00:32:27.106 clat percentiles (msec): 00:32:27.106 | 1.00th=[ 33], 5.00th=[ 48], 10.00th=[ 53], 20.00th=[ 61], 00:32:27.106 | 30.00th=[ 72], 40.00th=[ 73], 50.00th=[ 83], 60.00th=[ 85], 00:32:27.106 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 132], 00:32:27.106 | 99.00th=[ 157], 99.50th=[ 157], 99.90th=[ 157], 99.95th=[ 167], 00:32:27.106 | 99.99th=[ 167] 00:32:27.106 bw ( KiB/s): min= 600, max= 1024, per=4.19%, avg=753.50, stdev=118.77, samples=20 00:32:27.106 iops : min= 150, max= 256, avg=188.35, stdev=29.71, samples=20 00:32:27.106 lat (msec) : 50=8.68%, 100=66.74%, 250=24.58% 00:32:27.106 cpu : usr=33.90%, sys=0.87%, ctx=903, majf=0, minf=9 00:32:27.106 IO depths : 1=1.0%, 2=2.1%, 4=9.7%, 8=74.7%, 16=12.6%, 32=0.0%, >=64=0.0% 00:32:27.106 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 complete : 0=0.0%, 4=89.4%, 8=6.0%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.106 issued rwts: total=1900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.107 filename2: (groupid=0, jobs=1): err= 0: pid=90755: Fri Apr 26 15:50:55 2024 00:32:27.107 read: IOPS=165, BW=662KiB/s (677kB/s)(6628KiB/10019msec) 00:32:27.107 slat (usec): min=6, max=8025, avg=15.55, stdev=196.94 00:32:27.107 clat (msec): min=37, max=169, avg=96.60, stdev=26.12 00:32:27.107 lat (msec): min=37, max=169, avg=96.62, stdev=26.12 00:32:27.107 clat percentiles (msec): 00:32:27.107 | 1.00th=[ 46], 5.00th=[ 59], 10.00th=[ 61], 20.00th=[ 72], 00:32:27.107 | 30.00th=[ 82], 40.00th=[ 86], 50.00th=[ 97], 60.00th=[ 108], 00:32:27.107 | 70.00th=[ 109], 80.00th=[ 115], 90.00th=[ 132], 95.00th=[ 144], 00:32:27.107 | 99.00th=[ 161], 99.50th=[ 169], 99.90th=[ 169], 99.95th=[ 169], 00:32:27.107 | 99.99th=[ 169] 00:32:27.107 bw ( KiB/s): min= 464, max= 912, per=3.66%, avg=658.35, stdev=116.02, samples=20 00:32:27.107 iops : min= 116, max= 228, avg=164.55, stdev=29.06, samples=20 00:32:27.107 lat (msec) : 50=2.11%, 100=49.85%, 250=48.04% 00:32:27.107 cpu : usr=35.39%, sys=0.93%, ctx=939, majf=0, minf=9 
00:32:27.107 IO depths : 1=2.8%, 2=6.2%, 4=16.7%, 8=64.2%, 16=10.1%, 32=0.0%, >=64=0.0% 00:32:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 complete : 0=0.0%, 4=91.8%, 8=2.8%, 16=5.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 issued rwts: total=1657,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.107 filename2: (groupid=0, jobs=1): err= 0: pid=90756: Fri Apr 26 15:50:55 2024 00:32:27.107 read: IOPS=194, BW=778KiB/s (797kB/s)(7812KiB/10039msec) 00:32:27.107 slat (usec): min=5, max=4018, avg=12.98, stdev=90.81 00:32:27.107 clat (msec): min=36, max=195, avg=82.06, stdev=28.22 00:32:27.107 lat (msec): min=36, max=195, avg=82.07, stdev=28.23 00:32:27.107 clat percentiles (msec): 00:32:27.107 | 1.00th=[ 40], 5.00th=[ 46], 10.00th=[ 48], 20.00th=[ 56], 00:32:27.107 | 30.00th=[ 63], 40.00th=[ 72], 50.00th=[ 79], 60.00th=[ 86], 00:32:27.107 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 117], 95.00th=[ 133], 00:32:27.107 | 99.00th=[ 169], 99.50th=[ 188], 99.90th=[ 197], 99.95th=[ 197], 00:32:27.107 | 99.99th=[ 197] 00:32:27.107 bw ( KiB/s): min= 507, max= 1080, per=4.31%, avg=774.25, stdev=201.65, samples=20 00:32:27.107 iops : min= 126, max= 270, avg=193.50, stdev=50.48, samples=20 00:32:27.107 lat (msec) : 50=13.06%, 100=61.80%, 250=25.14% 00:32:27.107 cpu : usr=41.55%, sys=0.99%, ctx=1243, majf=0, minf=9 00:32:27.107 IO depths : 1=1.4%, 2=3.2%, 4=10.7%, 8=72.7%, 16=12.0%, 32=0.0%, >=64=0.0% 00:32:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 complete : 0=0.0%, 4=90.2%, 8=5.2%, 16=4.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 issued rwts: total=1953,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.107 filename2: (groupid=0, jobs=1): err= 0: pid=90757: Fri Apr 26 15:50:55 2024 00:32:27.107 read: IOPS=193, BW=775KiB/s (793kB/s)(7784KiB/10046msec) 00:32:27.107 slat (usec): min=4, max=8019, avg=20.96, stdev=240.54 00:32:27.107 clat (msec): min=33, max=161, avg=82.46, stdev=26.05 00:32:27.107 lat (msec): min=33, max=161, avg=82.48, stdev=26.06 00:32:27.107 clat percentiles (msec): 00:32:27.107 | 1.00th=[ 36], 5.00th=[ 48], 10.00th=[ 50], 20.00th=[ 59], 00:32:27.107 | 30.00th=[ 67], 40.00th=[ 73], 50.00th=[ 80], 60.00th=[ 85], 00:32:27.107 | 70.00th=[ 96], 80.00th=[ 107], 90.00th=[ 121], 95.00th=[ 132], 00:32:27.107 | 99.00th=[ 150], 99.50th=[ 153], 99.90th=[ 163], 99.95th=[ 163], 00:32:27.107 | 99.99th=[ 163] 00:32:27.107 bw ( KiB/s): min= 512, max= 968, per=4.29%, avg=771.50, stdev=126.08, samples=20 00:32:27.107 iops : min= 128, max= 242, avg=192.85, stdev=31.53, samples=20 00:32:27.107 lat (msec) : 50=10.79%, 100=65.11%, 250=24.10% 00:32:27.107 cpu : usr=41.98%, sys=1.05%, ctx=1492, majf=0, minf=9 00:32:27.107 IO depths : 1=1.4%, 2=3.2%, 4=10.0%, 8=73.2%, 16=12.2%, 32=0.0%, >=64=0.0% 00:32:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 complete : 0=0.0%, 4=89.9%, 8=5.6%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 issued rwts: total=1946,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.107 filename2: (groupid=0, jobs=1): err= 0: pid=90758: Fri Apr 26 15:50:55 2024 00:32:27.107 read: IOPS=161, BW=646KiB/s (661kB/s)(6464KiB/10007msec) 00:32:27.107 slat (usec): min=4, max=4018, avg=13.42, stdev=99.79 00:32:27.107 clat (msec): min=45, max=197, avg=98.97, 
stdev=23.55 00:32:27.107 lat (msec): min=45, max=197, avg=98.98, stdev=23.55 00:32:27.107 clat percentiles (msec): 00:32:27.107 | 1.00th=[ 56], 5.00th=[ 67], 10.00th=[ 72], 20.00th=[ 81], 00:32:27.107 | 30.00th=[ 83], 40.00th=[ 88], 50.00th=[ 100], 60.00th=[ 107], 00:32:27.107 | 70.00th=[ 113], 80.00th=[ 116], 90.00th=[ 127], 95.00th=[ 140], 00:32:27.107 | 99.00th=[ 167], 99.50th=[ 186], 99.90th=[ 197], 99.95th=[ 197], 00:32:27.107 | 99.99th=[ 197] 00:32:27.107 bw ( KiB/s): min= 384, max= 768, per=3.59%, avg=646.21, stdev=108.61, samples=19 00:32:27.107 iops : min= 96, max= 192, avg=161.53, stdev=27.15, samples=19 00:32:27.107 lat (msec) : 50=0.68%, 100=50.50%, 250=48.82% 00:32:27.107 cpu : usr=40.85%, sys=1.15%, ctx=1318, majf=0, minf=9 00:32:27.107 IO depths : 1=3.8%, 2=7.9%, 4=19.1%, 8=60.3%, 16=8.8%, 32=0.0%, >=64=0.0% 00:32:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 complete : 0=0.0%, 4=92.2%, 8=2.2%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 issued rwts: total=1616,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.107 filename2: (groupid=0, jobs=1): err= 0: pid=90759: Fri Apr 26 15:50:55 2024 00:32:27.107 read: IOPS=185, BW=742KiB/s (760kB/s)(7460KiB/10050msec) 00:32:27.107 slat (usec): min=3, max=8026, avg=15.03, stdev=185.65 00:32:27.107 clat (msec): min=35, max=179, avg=86.00, stdev=25.32 00:32:27.107 lat (msec): min=35, max=179, avg=86.02, stdev=25.32 00:32:27.107 clat percentiles (msec): 00:32:27.107 | 1.00th=[ 42], 5.00th=[ 48], 10.00th=[ 58], 20.00th=[ 64], 00:32:27.107 | 30.00th=[ 72], 40.00th=[ 74], 50.00th=[ 84], 60.00th=[ 85], 00:32:27.107 | 70.00th=[ 96], 80.00th=[ 108], 90.00th=[ 121], 95.00th=[ 133], 00:32:27.107 | 99.00th=[ 157], 99.50th=[ 163], 99.90th=[ 163], 99.95th=[ 180], 00:32:27.107 | 99.99th=[ 180] 00:32:27.107 bw ( KiB/s): min= 616, max= 912, per=4.11%, avg=739.60, stdev=100.88, samples=20 00:32:27.107 iops : min= 154, max= 228, avg=184.90, stdev=25.22, samples=20 00:32:27.107 lat (msec) : 50=6.60%, 100=68.10%, 250=25.31% 00:32:27.107 cpu : usr=34.19%, sys=0.98%, ctx=908, majf=0, minf=9 00:32:27.107 IO depths : 1=1.0%, 2=2.4%, 4=10.5%, 8=73.8%, 16=12.3%, 32=0.0%, >=64=0.0% 00:32:27.107 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:27.107 issued rwts: total=1865,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:27.107 latency : target=0, window=0, percentile=100.00%, depth=16 00:32:27.107 00:32:27.107 Run status group 0 (all jobs): 00:32:27.107 READ: bw=17.6MiB/s (18.4MB/s), 646KiB/s-868KiB/s (661kB/s-889kB/s), io=177MiB (185MB), run=10007-10078msec 00:32:27.107 15:50:55 -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:32:27.107 15:50:55 -- target/dif.sh@43 -- # local sub 00:32:27.107 15:50:55 -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.107 15:50:55 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:27.107 15:50:55 -- target/dif.sh@36 -- # local sub_id=0 00:32:27.107 15:50:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 
00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.107 15:50:55 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:27.107 15:50:55 -- target/dif.sh@36 -- # local sub_id=1 00:32:27.107 15:50:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@45 -- # for sub in "$@" 00:32:27.107 15:50:55 -- target/dif.sh@46 -- # destroy_subsystem 2 00:32:27.107 15:50:55 -- target/dif.sh@36 -- # local sub_id=2 00:32:27.107 15:50:55 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@115 -- # NULL_DIF=1 00:32:27.107 15:50:55 -- target/dif.sh@115 -- # bs=8k,16k,128k 00:32:27.107 15:50:55 -- target/dif.sh@115 -- # numjobs=2 00:32:27.107 15:50:55 -- target/dif.sh@115 -- # iodepth=8 00:32:27.107 15:50:55 -- target/dif.sh@115 -- # runtime=5 00:32:27.107 15:50:55 -- target/dif.sh@115 -- # files=1 00:32:27.107 15:50:55 -- target/dif.sh@117 -- # create_subsystems 0 1 00:32:27.107 15:50:55 -- target/dif.sh@28 -- # local sub 00:32:27.107 15:50:55 -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.107 15:50:55 -- target/dif.sh@31 -- # create_subsystem 0 00:32:27.107 15:50:55 -- target/dif.sh@18 -- # local sub_id=0 00:32:27.107 15:50:55 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.107 bdev_null0 00:32:27.107 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.107 15:50:55 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:27.107 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.107 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:55 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:27.108 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.108 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 15:50:55 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:55 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:27.108 15:50:55 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.108 15:50:55 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 [2024-04-26 15:50:55.998620] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:27.108 15:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:56 -- target/dif.sh@30 -- # for sub in "$@" 00:32:27.108 15:50:56 -- target/dif.sh@31 -- # create_subsystem 1 00:32:27.108 15:50:56 -- target/dif.sh@18 -- # local sub_id=1 00:32:27.108 15:50:56 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:32:27.108 15:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.108 15:50:56 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 bdev_null1 00:32:27.108 15:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:56 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:32:27.108 15:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.108 15:50:56 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 15:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:56 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:32:27.108 15:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.108 15:50:56 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 15:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:56 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:27.108 15:50:56 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:27.108 15:50:56 -- common/autotest_common.sh@10 -- # set +x 00:32:27.108 15:50:56 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:27.108 15:50:56 -- target/dif.sh@118 -- # fio /dev/fd/62 00:32:27.108 15:50:56 -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:32:27.108 15:50:56 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:32:27.108 15:50:56 -- nvmf/common.sh@521 -- # config=() 00:32:27.108 15:50:56 -- nvmf/common.sh@521 -- # local subsystem config 00:32:27.108 15:50:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:27.108 15:50:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:27.108 { 00:32:27.108 "params": { 00:32:27.108 "name": "Nvme$subsystem", 00:32:27.108 "trtype": "$TEST_TRANSPORT", 00:32:27.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.108 "adrfam": "ipv4", 00:32:27.108 "trsvcid": "$NVMF_PORT", 00:32:27.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.108 "hdgst": ${hdgst:-false}, 00:32:27.108 "ddgst": ${ddgst:-false} 00:32:27.108 }, 00:32:27.108 "method": "bdev_nvme_attach_controller" 00:32:27.108 } 00:32:27.108 EOF 00:32:27.108 )") 00:32:27.108 15:50:56 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.108 15:50:56 -- target/dif.sh@82 -- # gen_fio_conf 00:32:27.108 15:50:56 -- target/dif.sh@54 -- # local file 00:32:27.108 15:50:56 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.108 15:50:56 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:27.108 
15:50:56 -- target/dif.sh@56 -- # cat 00:32:27.108 15:50:56 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:27.108 15:50:56 -- nvmf/common.sh@543 -- # cat 00:32:27.108 15:50:56 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:27.108 15:50:56 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:27.108 15:50:56 -- common/autotest_common.sh@1327 -- # shift 00:32:27.108 15:50:56 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:27.108 15:50:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.108 15:50:56 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:27.108 15:50:56 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:27.108 { 00:32:27.108 "params": { 00:32:27.108 "name": "Nvme$subsystem", 00:32:27.108 "trtype": "$TEST_TRANSPORT", 00:32:27.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:27.108 "adrfam": "ipv4", 00:32:27.108 "trsvcid": "$NVMF_PORT", 00:32:27.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:27.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:27.108 "hdgst": ${hdgst:-false}, 00:32:27.108 "ddgst": ${ddgst:-false} 00:32:27.108 }, 00:32:27.108 "method": "bdev_nvme_attach_controller" 00:32:27.108 } 00:32:27.108 EOF 00:32:27.108 )") 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:27.108 15:50:56 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:27.108 15:50:56 -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.108 15:50:56 -- target/dif.sh@73 -- # cat 00:32:27.108 15:50:56 -- nvmf/common.sh@543 -- # cat 00:32:27.108 15:50:56 -- target/dif.sh@72 -- # (( file++ )) 00:32:27.108 15:50:56 -- target/dif.sh@72 -- # (( file <= files )) 00:32:27.108 15:50:56 -- nvmf/common.sh@545 -- # jq . 
00:32:27.108 15:50:56 -- nvmf/common.sh@546 -- # IFS=, 00:32:27.108 15:50:56 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:27.108 "params": { 00:32:27.108 "name": "Nvme0", 00:32:27.108 "trtype": "tcp", 00:32:27.108 "traddr": "10.0.0.2", 00:32:27.108 "adrfam": "ipv4", 00:32:27.108 "trsvcid": "4420", 00:32:27.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:27.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:27.108 "hdgst": false, 00:32:27.108 "ddgst": false 00:32:27.108 }, 00:32:27.108 "method": "bdev_nvme_attach_controller" 00:32:27.108 },{ 00:32:27.108 "params": { 00:32:27.108 "name": "Nvme1", 00:32:27.108 "trtype": "tcp", 00:32:27.108 "traddr": "10.0.0.2", 00:32:27.108 "adrfam": "ipv4", 00:32:27.108 "trsvcid": "4420", 00:32:27.108 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:27.108 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:27.108 "hdgst": false, 00:32:27.108 "ddgst": false 00:32:27.108 }, 00:32:27.108 "method": "bdev_nvme_attach_controller" 00:32:27.108 }' 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:27.108 15:50:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:27.108 15:50:56 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:27.108 15:50:56 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:27.108 15:50:56 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:27.108 15:50:56 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:27.108 15:50:56 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:27.108 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:27.108 ... 00:32:27.108 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:32:27.108 ... 
00:32:27.108 fio-3.35 00:32:27.108 Starting 4 threads 00:32:32.381 00:32:32.381 filename0: (groupid=0, jobs=1): err= 0: pid=90895: Fri Apr 26 15:51:01 2024 00:32:32.381 read: IOPS=1860, BW=14.5MiB/s (15.2MB/s)(72.7MiB/5002msec) 00:32:32.381 slat (nsec): min=6609, max=44613, avg=8817.74, stdev=2584.78 00:32:32.381 clat (usec): min=1784, max=7997, avg=4256.84, stdev=308.30 00:32:32.381 lat (usec): min=1792, max=8004, avg=4265.66, stdev=308.28 00:32:32.381 clat percentiles (usec): 00:32:32.381 | 1.00th=[ 3752], 5.00th=[ 4080], 10.00th=[ 4080], 20.00th=[ 4113], 00:32:32.381 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:32:32.381 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4686], 00:32:32.381 | 99.00th=[ 5669], 99.50th=[ 5800], 99.90th=[ 6390], 99.95th=[ 7767], 00:32:32.381 | 99.99th=[ 8029] 00:32:32.381 bw ( KiB/s): min=14208, max=15268, per=25.04%, avg=14862.89, stdev=295.72, samples=9 00:32:32.381 iops : min= 1776, max= 1908, avg=1857.78, stdev=36.88, samples=9 00:32:32.381 lat (msec) : 2=0.17%, 4=1.04%, 10=98.79% 00:32:32.381 cpu : usr=93.42%, sys=5.30%, ctx=4, majf=0, minf=0 00:32:32.381 IO depths : 1=9.7%, 2=25.0%, 4=50.0%, 8=15.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 issued rwts: total=9304,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.381 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:32.381 filename0: (groupid=0, jobs=1): err= 0: pid=90896: Fri Apr 26 15:51:01 2024 00:32:32.381 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:32:32.381 slat (usec): min=3, max=108, avg=15.05, stdev= 5.07 00:32:32.381 clat (usec): min=2205, max=9340, avg=4247.53, stdev=355.28 00:32:32.381 lat (usec): min=2217, max=9353, avg=4262.58, stdev=355.06 00:32:32.381 clat percentiles (usec): 00:32:32.381 | 1.00th=[ 3949], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:32:32.381 | 30.00th=[ 4113], 40.00th=[ 4146], 50.00th=[ 4178], 60.00th=[ 4228], 00:32:32.381 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4686], 00:32:32.381 | 99.00th=[ 5866], 99.50th=[ 6325], 99.90th=[ 6980], 99.95th=[ 7898], 00:32:32.381 | 99.99th=[ 9372] 00:32:32.381 bw ( KiB/s): min=14208, max=15232, per=24.91%, avg=14786.00, stdev=293.90, samples=9 00:32:32.381 iops : min= 1776, max= 1904, avg=1848.22, stdev=36.75, samples=9 00:32:32.381 lat (msec) : 4=2.09%, 10=97.91% 00:32:32.381 cpu : usr=93.24%, sys=5.50%, ctx=8, majf=0, minf=9 00:32:32.381 IO depths : 1=11.0%, 2=25.0%, 4=50.0%, 8=14.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 issued rwts: total=9264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.381 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:32.381 filename1: (groupid=0, jobs=1): err= 0: pid=90897: Fri Apr 26 15:51:01 2024 00:32:32.381 read: IOPS=1852, BW=14.5MiB/s (15.2MB/s)(72.4MiB/5002msec) 00:32:32.381 slat (nsec): min=4060, max=58090, avg=14486.61, stdev=4963.89 00:32:32.381 clat (usec): min=1726, max=9333, avg=4246.90, stdev=390.31 00:32:32.381 lat (usec): min=1737, max=9341, avg=4261.38, stdev=390.16 00:32:32.381 clat percentiles (usec): 00:32:32.381 | 1.00th=[ 3949], 5.00th=[ 4047], 10.00th=[ 4047], 20.00th=[ 4080], 00:32:32.381 | 30.00th=[ 4113], 40.00th=[ 4146], 
50.00th=[ 4178], 60.00th=[ 4228], 00:32:32.381 | 70.00th=[ 4228], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4686], 00:32:32.381 | 99.00th=[ 6063], 99.50th=[ 6718], 99.90th=[ 7898], 99.95th=[ 8029], 00:32:32.381 | 99.99th=[ 9372] 00:32:32.381 bw ( KiB/s): min=14208, max=15248, per=24.92%, avg=14791.11, stdev=295.90, samples=9 00:32:32.381 iops : min= 1776, max= 1906, avg=1848.89, stdev=36.99, samples=9 00:32:32.381 lat (msec) : 2=0.05%, 4=1.93%, 10=98.01% 00:32:32.381 cpu : usr=93.80%, sys=5.00%, ctx=28, majf=0, minf=9 00:32:32.381 IO depths : 1=11.2%, 2=25.0%, 4=50.0%, 8=13.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 issued rwts: total=9264,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.381 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:32.381 filename1: (groupid=0, jobs=1): err= 0: pid=90898: Fri Apr 26 15:51:01 2024 00:32:32.381 read: IOPS=1854, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5001msec) 00:32:32.381 slat (nsec): min=7098, max=52409, avg=11224.64, stdev=4462.27 00:32:32.381 clat (usec): min=2156, max=9388, avg=4265.81, stdev=351.67 00:32:32.381 lat (usec): min=2164, max=9404, avg=4277.03, stdev=351.82 00:32:32.381 clat percentiles (usec): 00:32:32.381 | 1.00th=[ 3556], 5.00th=[ 4080], 10.00th=[ 4113], 20.00th=[ 4113], 00:32:32.381 | 30.00th=[ 4146], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4228], 00:32:32.381 | 70.00th=[ 4293], 80.00th=[ 4293], 90.00th=[ 4424], 95.00th=[ 4686], 00:32:32.381 | 99.00th=[ 5866], 99.50th=[ 6325], 99.90th=[ 7046], 99.95th=[ 7898], 00:32:32.381 | 99.99th=[ 9372] 00:32:32.381 bw ( KiB/s): min=14208, max=15316, per=24.94%, avg=14803.67, stdev=283.22, samples=9 00:32:32.381 iops : min= 1776, max= 1914, avg=1850.33, stdev=35.31, samples=9 00:32:32.381 lat (msec) : 4=1.41%, 10=98.59% 00:32:32.381 cpu : usr=94.70%, sys=4.12%, ctx=5, majf=0, minf=9 00:32:32.381 IO depths : 1=8.2%, 2=18.0%, 4=57.0%, 8=16.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:32.381 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 complete : 0=0.0%, 4=89.3%, 8=10.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:32.381 issued rwts: total=9275,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:32.381 latency : target=0, window=0, percentile=100.00%, depth=8 00:32:32.381 00:32:32.381 Run status group 0 (all jobs): 00:32:32.381 READ: bw=58.0MiB/s (60.8MB/s), 14.5MiB/s-14.5MiB/s (15.2MB/s-15.2MB/s), io=290MiB (304MB), run=5001-5002msec 00:32:32.381 15:51:02 -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:32:32.381 15:51:02 -- target/dif.sh@43 -- # local sub 00:32:32.381 15:51:02 -- target/dif.sh@45 -- # for sub in "$@" 00:32:32.381 15:51:02 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:32.381 15:51:02 -- target/dif.sh@36 -- # local sub_id=0 00:32:32.381 15:51:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:32.381 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.381 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.381 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.381 15:51:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:32.381 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.381 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.381 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.381 15:51:02 -- target/dif.sh@45 
-- # for sub in "$@" 00:32:32.382 15:51:02 -- target/dif.sh@46 -- # destroy_subsystem 1 00:32:32.382 15:51:02 -- target/dif.sh@36 -- # local sub_id=1 00:32:32.382 15:51:02 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:32.382 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.382 15:51:02 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:32:32.382 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 ************************************ 00:32:32.382 END TEST fio_dif_rand_params 00:32:32.382 ************************************ 00:32:32.382 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.382 00:32:32.382 real 0m23.921s 00:32:32.382 user 2m7.149s 00:32:32.382 sys 0m5.224s 00:32:32.382 15:51:02 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 15:51:02 -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:32:32.382 15:51:02 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:32.382 15:51:02 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 ************************************ 00:32:32.382 START TEST fio_dif_digest 00:32:32.382 ************************************ 00:32:32.382 15:51:02 -- common/autotest_common.sh@1111 -- # fio_dif_digest 00:32:32.382 15:51:02 -- target/dif.sh@123 -- # local NULL_DIF 00:32:32.382 15:51:02 -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:32:32.382 15:51:02 -- target/dif.sh@125 -- # local hdgst ddgst 00:32:32.382 15:51:02 -- target/dif.sh@127 -- # NULL_DIF=3 00:32:32.382 15:51:02 -- target/dif.sh@127 -- # bs=128k,128k,128k 00:32:32.382 15:51:02 -- target/dif.sh@127 -- # numjobs=3 00:32:32.382 15:51:02 -- target/dif.sh@127 -- # iodepth=3 00:32:32.382 15:51:02 -- target/dif.sh@127 -- # runtime=10 00:32:32.382 15:51:02 -- target/dif.sh@128 -- # hdgst=true 00:32:32.382 15:51:02 -- target/dif.sh@128 -- # ddgst=true 00:32:32.382 15:51:02 -- target/dif.sh@130 -- # create_subsystems 0 00:32:32.382 15:51:02 -- target/dif.sh@28 -- # local sub 00:32:32.382 15:51:02 -- target/dif.sh@30 -- # for sub in "$@" 00:32:32.382 15:51:02 -- target/dif.sh@31 -- # create_subsystem 0 00:32:32.382 15:51:02 -- target/dif.sh@18 -- # local sub_id=0 00:32:32.382 15:51:02 -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:32:32.382 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 bdev_null0 00:32:32.382 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.382 15:51:02 -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:32:32.382 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.382 15:51:02 -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:32:32.382 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.382 15:51:02 -- 
common/autotest_common.sh@10 -- # set +x 00:32:32.382 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.382 15:51:02 -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:32.382 15:51:02 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:32.382 15:51:02 -- common/autotest_common.sh@10 -- # set +x 00:32:32.382 [2024-04-26 15:51:02.399784] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:32.382 15:51:02 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:32.382 15:51:02 -- target/dif.sh@131 -- # fio /dev/fd/62 00:32:32.382 15:51:02 -- target/dif.sh@131 -- # create_json_sub_conf 0 00:32:32.382 15:51:02 -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:32:32.382 15:51:02 -- nvmf/common.sh@521 -- # config=() 00:32:32.382 15:51:02 -- nvmf/common.sh@521 -- # local subsystem config 00:32:32.382 15:51:02 -- nvmf/common.sh@523 -- # for subsystem in "${@:-1}" 00:32:32.382 15:51:02 -- target/dif.sh@82 -- # gen_fio_conf 00:32:32.382 15:51:02 -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.382 15:51:02 -- common/autotest_common.sh@1342 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.382 15:51:02 -- common/autotest_common.sh@1323 -- # local fio_dir=/usr/src/fio 00:32:32.382 15:51:02 -- target/dif.sh@54 -- # local file 00:32:32.382 15:51:02 -- target/dif.sh@56 -- # cat 00:32:32.382 15:51:02 -- common/autotest_common.sh@1325 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:32.382 15:51:02 -- common/autotest_common.sh@1325 -- # local sanitizers 00:32:32.382 15:51:02 -- nvmf/common.sh@543 -- # config+=("$(cat <<-EOF 00:32:32.382 { 00:32:32.382 "params": { 00:32:32.382 "name": "Nvme$subsystem", 00:32:32.382 "trtype": "$TEST_TRANSPORT", 00:32:32.382 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:32.382 "adrfam": "ipv4", 00:32:32.382 "trsvcid": "$NVMF_PORT", 00:32:32.382 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:32.382 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:32.382 "hdgst": ${hdgst:-false}, 00:32:32.382 "ddgst": ${ddgst:-false} 00:32:32.382 }, 00:32:32.382 "method": "bdev_nvme_attach_controller" 00:32:32.382 } 00:32:32.382 EOF 00:32:32.382 )") 00:32:32.382 15:51:02 -- common/autotest_common.sh@1326 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:32.382 15:51:02 -- common/autotest_common.sh@1327 -- # shift 00:32:32.382 15:51:02 -- common/autotest_common.sh@1329 -- # local asan_lib= 00:32:32.382 15:51:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.382 15:51:02 -- target/dif.sh@72 -- # (( file = 1 )) 00:32:32.382 15:51:02 -- nvmf/common.sh@543 -- # cat 00:32:32.382 15:51:02 -- target/dif.sh@72 -- # (( file <= files )) 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # grep libasan 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:32.382 15:51:02 -- nvmf/common.sh@545 -- # jq . 
00:32:32.382 15:51:02 -- nvmf/common.sh@546 -- # IFS=, 00:32:32.382 15:51:02 -- nvmf/common.sh@547 -- # printf '%s\n' '{ 00:32:32.382 "params": { 00:32:32.382 "name": "Nvme0", 00:32:32.382 "trtype": "tcp", 00:32:32.382 "traddr": "10.0.0.2", 00:32:32.382 "adrfam": "ipv4", 00:32:32.382 "trsvcid": "4420", 00:32:32.382 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:32.382 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:32:32.382 "hdgst": true, 00:32:32.382 "ddgst": true 00:32:32.382 }, 00:32:32.382 "method": "bdev_nvme_attach_controller" 00:32:32.382 }' 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:32.382 15:51:02 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:32.382 15:51:02 -- common/autotest_common.sh@1330 -- # for sanitizer in "${sanitizers[@]}" 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # grep libclang_rt.asan 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # awk '{print $3}' 00:32:32.382 15:51:02 -- common/autotest_common.sh@1331 -- # asan_lib= 00:32:32.382 15:51:02 -- common/autotest_common.sh@1332 -- # [[ -n '' ]] 00:32:32.382 15:51:02 -- common/autotest_common.sh@1338 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:32:32.382 15:51:02 -- common/autotest_common.sh@1338 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:32:32.382 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:32:32.382 ... 00:32:32.382 fio-3.35 00:32:32.382 Starting 3 threads 00:32:44.606 00:32:44.606 filename0: (groupid=0, jobs=1): err= 0: pid=91008: Fri Apr 26 15:51:13 2024 00:32:44.606 read: IOPS=157, BW=19.7MiB/s (20.7MB/s)(198MiB/10044msec) 00:32:44.606 slat (nsec): min=7378, max=77612, avg=13409.51, stdev=5272.87 00:32:44.606 clat (usec): min=10870, max=50213, avg=18968.11, stdev=2152.93 00:32:44.606 lat (usec): min=10880, max=50225, avg=18981.52, stdev=2153.51 00:32:44.606 clat percentiles (usec): 00:32:44.606 | 1.00th=[12780], 5.00th=[16450], 10.00th=[16909], 20.00th=[17695], 00:32:44.606 | 30.00th=[18220], 40.00th=[18482], 50.00th=[18744], 60.00th=[19268], 00:32:44.606 | 70.00th=[19792], 80.00th=[20055], 90.00th=[20841], 95.00th=[21890], 00:32:44.606 | 99.00th=[25035], 99.50th=[25822], 99.90th=[48497], 99.95th=[50070], 00:32:44.606 | 99.99th=[50070] 00:32:44.606 bw ( KiB/s): min=18212, max=21504, per=27.24%, avg=20185.47, stdev=836.83, samples=19 00:32:44.606 iops : min= 142, max= 168, avg=157.68, stdev= 6.57, samples=19 00:32:44.606 lat (msec) : 20=76.85%, 50=23.09%, 100=0.06% 00:32:44.606 cpu : usr=93.10%, sys=5.71%, ctx=16, majf=0, minf=0 00:32:44.606 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.606 issued rwts: total=1585,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.606 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.606 filename0: (groupid=0, jobs=1): err= 0: pid=91009: Fri Apr 26 15:51:13 2024 00:32:44.606 read: IOPS=219, BW=27.4MiB/s (28.7MB/s)(274MiB/10006msec) 00:32:44.606 slat (nsec): min=4432, max=50680, avg=15024.98, stdev=5022.55 00:32:44.606 clat (usec): min=8178, max=57955, avg=13669.10, stdev=2601.11 00:32:44.606 lat (usec): min=8189, max=57971, avg=13684.12, 
stdev=2601.42 00:32:44.606 clat percentiles (usec): 00:32:44.606 | 1.00th=[11076], 5.00th=[11994], 10.00th=[12387], 20.00th=[12780], 00:32:44.606 | 30.00th=[13042], 40.00th=[13304], 50.00th=[13435], 60.00th=[13698], 00:32:44.606 | 70.00th=[13829], 80.00th=[14091], 90.00th=[14615], 95.00th=[15795], 00:32:44.606 | 99.00th=[18482], 99.50th=[20055], 99.90th=[57410], 99.95th=[57934], 00:32:44.606 | 99.99th=[57934] 00:32:44.606 bw ( KiB/s): min=23040, max=29952, per=37.69%, avg=27927.95, stdev=1606.66, samples=19 00:32:44.606 iops : min= 180, max= 234, avg=218.16, stdev=12.54, samples=19 00:32:44.606 lat (msec) : 10=0.64%, 20=98.86%, 50=0.23%, 100=0.27% 00:32:44.606 cpu : usr=92.31%, sys=6.11%, ctx=26, majf=0, minf=0 00:32:44.606 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.606 issued rwts: total=2193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.606 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.606 filename0: (groupid=0, jobs=1): err= 0: pid=91010: Fri Apr 26 15:51:13 2024 00:32:44.606 read: IOPS=203, BW=25.4MiB/s (26.7MB/s)(255MiB/10004msec) 00:32:44.606 slat (nsec): min=7307, max=42614, avg=13514.11, stdev=3847.61 00:32:44.606 clat (usec): min=7779, max=57733, avg=14722.50, stdev=2217.38 00:32:44.606 lat (usec): min=7790, max=57744, avg=14736.02, stdev=2217.42 00:32:44.606 clat percentiles (usec): 00:32:44.606 | 1.00th=[11863], 5.00th=[12649], 10.00th=[13173], 20.00th=[13566], 00:32:44.606 | 30.00th=[13960], 40.00th=[14222], 50.00th=[14484], 60.00th=[14746], 00:32:44.606 | 70.00th=[15139], 80.00th=[15533], 90.00th=[16188], 95.00th=[17433], 00:32:44.606 | 99.00th=[20055], 99.50th=[21627], 99.90th=[55313], 99.95th=[56886], 00:32:44.606 | 99.99th=[57934] 00:32:44.606 bw ( KiB/s): min=23040, max=27648, per=35.06%, avg=25977.26, stdev=1198.83, samples=19 00:32:44.606 iops : min= 180, max= 216, avg=202.95, stdev= 9.37, samples=19 00:32:44.606 lat (msec) : 10=0.59%, 20=98.28%, 50=0.98%, 100=0.15% 00:32:44.606 cpu : usr=92.79%, sys=5.77%, ctx=6, majf=0, minf=0 00:32:44.606 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:32:44.606 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.606 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:44.606 issued rwts: total=2036,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:44.606 latency : target=0, window=0, percentile=100.00%, depth=3 00:32:44.606 00:32:44.606 Run status group 0 (all jobs): 00:32:44.606 READ: bw=72.4MiB/s (75.9MB/s), 19.7MiB/s-27.4MiB/s (20.7MB/s-28.7MB/s), io=727MiB (762MB), run=10004-10044msec 00:32:44.606 15:51:13 -- target/dif.sh@132 -- # destroy_subsystems 0 00:32:44.606 15:51:13 -- target/dif.sh@43 -- # local sub 00:32:44.606 15:51:13 -- target/dif.sh@45 -- # for sub in "$@" 00:32:44.606 15:51:13 -- target/dif.sh@46 -- # destroy_subsystem 0 00:32:44.606 15:51:13 -- target/dif.sh@36 -- # local sub_id=0 00:32:44.606 15:51:13 -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:44.606 15:51:13 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.606 15:51:13 -- common/autotest_common.sh@10 -- # set +x 00:32:44.606 15:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.606 15:51:13 -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:32:44.606 15:51:13 -- 
common/autotest_common.sh@549 -- # xtrace_disable 00:32:44.606 15:51:13 -- common/autotest_common.sh@10 -- # set +x 00:32:44.606 ************************************ 00:32:44.606 END TEST fio_dif_digest 00:32:44.606 ************************************ 00:32:44.606 15:51:13 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:44.606 00:32:44.606 real 0m11.076s 00:32:44.606 user 0m28.597s 00:32:44.606 sys 0m2.025s 00:32:44.606 15:51:13 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:44.606 15:51:13 -- common/autotest_common.sh@10 -- # set +x 00:32:44.606 15:51:13 -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:32:44.606 15:51:13 -- target/dif.sh@147 -- # nvmftestfini 00:32:44.606 15:51:13 -- nvmf/common.sh@477 -- # nvmfcleanup 00:32:44.606 15:51:13 -- nvmf/common.sh@117 -- # sync 00:32:44.606 15:51:13 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:44.606 15:51:13 -- nvmf/common.sh@120 -- # set +e 00:32:44.606 15:51:13 -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:44.606 15:51:13 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:44.606 rmmod nvme_tcp 00:32:44.606 rmmod nvme_fabrics 00:32:44.606 rmmod nvme_keyring 00:32:44.606 15:51:13 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:44.606 15:51:13 -- nvmf/common.sh@124 -- # set -e 00:32:44.606 15:51:13 -- nvmf/common.sh@125 -- # return 0 00:32:44.606 15:51:13 -- nvmf/common.sh@478 -- # '[' -n 90220 ']' 00:32:44.606 15:51:13 -- nvmf/common.sh@479 -- # killprocess 90220 00:32:44.606 15:51:13 -- common/autotest_common.sh@936 -- # '[' -z 90220 ']' 00:32:44.606 15:51:13 -- common/autotest_common.sh@940 -- # kill -0 90220 00:32:44.606 15:51:13 -- common/autotest_common.sh@941 -- # uname 00:32:44.606 15:51:13 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:44.606 15:51:13 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 90220 00:32:44.606 killing process with pid 90220 00:32:44.606 15:51:13 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:44.606 15:51:13 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:44.606 15:51:13 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 90220' 00:32:44.606 15:51:13 -- common/autotest_common.sh@955 -- # kill 90220 00:32:44.606 15:51:13 -- common/autotest_common.sh@960 -- # wait 90220 00:32:44.606 15:51:13 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:32:44.606 15:51:13 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:32:44.606 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:44.606 Waiting for block devices as requested 00:32:44.606 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:32:44.606 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:32:44.606 15:51:14 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:32:44.606 15:51:14 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:32:44.606 15:51:14 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:44.606 15:51:14 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:44.606 15:51:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.606 15:51:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:44.606 15:51:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.606 15:51:14 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:32:44.606 00:32:44.606 real 1m0.882s 00:32:44.606 user 3m52.215s 00:32:44.606 sys 0m15.957s 00:32:44.606 15:51:14 
-- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:44.606 15:51:14 -- common/autotest_common.sh@10 -- # set +x 00:32:44.606 ************************************ 00:32:44.606 END TEST nvmf_dif 00:32:44.606 ************************************ 00:32:44.606 15:51:14 -- spdk/autotest.sh@291 -- # run_test nvmf_abort_qd_sizes /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:44.607 15:51:14 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:44.607 15:51:14 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:44.607 15:51:14 -- common/autotest_common.sh@10 -- # set +x 00:32:44.607 ************************************ 00:32:44.607 START TEST nvmf_abort_qd_sizes 00:32:44.607 ************************************ 00:32:44.607 15:51:14 -- common/autotest_common.sh@1111 -- # /home/vagrant/spdk_repo/spdk/test/nvmf/target/abort_qd_sizes.sh 00:32:44.607 * Looking for test storage... 00:32:44.607 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvmf/target 00:32:44.607 15:51:14 -- target/abort_qd_sizes.sh@14 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:32:44.607 15:51:14 -- nvmf/common.sh@7 -- # uname -s 00:32:44.607 15:51:14 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:44.607 15:51:14 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:44.607 15:51:14 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:44.607 15:51:14 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:44.607 15:51:14 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:44.607 15:51:14 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:44.607 15:51:14 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:44.607 15:51:14 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:44.607 15:51:14 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:44.607 15:51:14 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:44.607 15:51:14 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:32:44.607 15:51:14 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:32:44.607 15:51:14 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:44.607 15:51:14 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:44.607 15:51:14 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:32:44.607 15:51:14 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:44.607 15:51:14 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:32:44.607 15:51:14 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:44.607 15:51:14 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:44.607 15:51:14 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:44.607 15:51:14 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.607 15:51:14 -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.607 15:51:14 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.607 15:51:14 -- paths/export.sh@5 -- # export PATH 00:32:44.607 15:51:14 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:44.607 15:51:14 -- nvmf/common.sh@47 -- # : 0 00:32:44.607 15:51:14 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:44.607 15:51:14 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:44.607 15:51:14 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:44.607 15:51:14 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:44.607 15:51:14 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:44.607 15:51:14 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:44.607 15:51:14 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:44.607 15:51:14 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:44.607 15:51:14 -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:32:44.607 15:51:14 -- nvmf/common.sh@430 -- # '[' -z tcp ']' 00:32:44.607 15:51:14 -- nvmf/common.sh@435 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:44.607 15:51:14 -- nvmf/common.sh@437 -- # prepare_net_devs 00:32:44.607 15:51:14 -- nvmf/common.sh@399 -- # local -g is_hw=no 00:32:44.607 15:51:14 -- nvmf/common.sh@401 -- # remove_spdk_ns 00:32:44.607 15:51:14 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:44.607 15:51:14 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:32:44.607 15:51:14 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:44.607 15:51:14 -- nvmf/common.sh@403 -- # [[ virt != virt ]] 00:32:44.607 15:51:14 -- nvmf/common.sh@405 -- # [[ no == yes ]] 00:32:44.607 15:51:14 -- nvmf/common.sh@412 -- # [[ virt == phy ]] 00:32:44.607 15:51:14 -- nvmf/common.sh@415 -- # [[ virt == phy-fallback ]] 00:32:44.607 15:51:14 -- nvmf/common.sh@420 -- # [[ tcp == tcp ]] 00:32:44.607 15:51:14 -- nvmf/common.sh@421 -- # nvmf_veth_init 00:32:44.607 15:51:14 -- nvmf/common.sh@141 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:44.607 15:51:14 -- nvmf/common.sh@142 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:44.607 15:51:14 -- nvmf/common.sh@143 -- # NVMF_SECOND_TARGET_IP=10.0.0.3 00:32:44.607 15:51:14 -- nvmf/common.sh@144 -- # NVMF_BRIDGE=nvmf_br 00:32:44.607 15:51:14 -- nvmf/common.sh@145 -- # NVMF_INITIATOR_INTERFACE=nvmf_init_if 00:32:44.607 15:51:14 -- nvmf/common.sh@146 -- # NVMF_INITIATOR_BRIDGE=nvmf_init_br 
00:32:44.607 15:51:14 -- nvmf/common.sh@147 -- # NVMF_TARGET_NAMESPACE=nvmf_tgt_ns_spdk 00:32:44.607 15:51:14 -- nvmf/common.sh@148 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:44.607 15:51:14 -- nvmf/common.sh@149 -- # NVMF_TARGET_INTERFACE=nvmf_tgt_if 00:32:44.607 15:51:14 -- nvmf/common.sh@150 -- # NVMF_TARGET_INTERFACE2=nvmf_tgt_if2 00:32:44.607 15:51:14 -- nvmf/common.sh@151 -- # NVMF_TARGET_BRIDGE=nvmf_tgt_br 00:32:44.607 15:51:14 -- nvmf/common.sh@152 -- # NVMF_TARGET_BRIDGE2=nvmf_tgt_br2 00:32:44.607 15:51:14 -- nvmf/common.sh@154 -- # ip link set nvmf_init_br nomaster 00:32:44.607 15:51:14 -- nvmf/common.sh@155 -- # ip link set nvmf_tgt_br nomaster 00:32:44.607 Cannot find device "nvmf_tgt_br" 00:32:44.607 15:51:14 -- nvmf/common.sh@155 -- # true 00:32:44.607 15:51:14 -- nvmf/common.sh@156 -- # ip link set nvmf_tgt_br2 nomaster 00:32:44.607 Cannot find device "nvmf_tgt_br2" 00:32:44.607 15:51:14 -- nvmf/common.sh@156 -- # true 00:32:44.607 15:51:14 -- nvmf/common.sh@157 -- # ip link set nvmf_init_br down 00:32:44.607 15:51:14 -- nvmf/common.sh@158 -- # ip link set nvmf_tgt_br down 00:32:44.607 Cannot find device "nvmf_tgt_br" 00:32:44.607 15:51:14 -- nvmf/common.sh@158 -- # true 00:32:44.607 15:51:14 -- nvmf/common.sh@159 -- # ip link set nvmf_tgt_br2 down 00:32:44.607 Cannot find device "nvmf_tgt_br2" 00:32:44.607 15:51:14 -- nvmf/common.sh@159 -- # true 00:32:44.607 15:51:14 -- nvmf/common.sh@160 -- # ip link delete nvmf_br type bridge 00:32:44.607 15:51:14 -- nvmf/common.sh@161 -- # ip link delete nvmf_init_if 00:32:44.607 15:51:14 -- nvmf/common.sh@162 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if 00:32:44.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:44.607 15:51:14 -- nvmf/common.sh@162 -- # true 00:32:44.607 15:51:14 -- nvmf/common.sh@163 -- # ip netns exec nvmf_tgt_ns_spdk ip link delete nvmf_tgt_if2 00:32:44.607 Cannot open network namespace "nvmf_tgt_ns_spdk": No such file or directory 00:32:44.607 15:51:14 -- nvmf/common.sh@163 -- # true 00:32:44.607 15:51:14 -- nvmf/common.sh@166 -- # ip netns add nvmf_tgt_ns_spdk 00:32:44.607 15:51:14 -- nvmf/common.sh@169 -- # ip link add nvmf_init_if type veth peer name nvmf_init_br 00:32:44.607 15:51:14 -- nvmf/common.sh@170 -- # ip link add nvmf_tgt_if type veth peer name nvmf_tgt_br 00:32:44.607 15:51:14 -- nvmf/common.sh@171 -- # ip link add nvmf_tgt_if2 type veth peer name nvmf_tgt_br2 00:32:44.866 15:51:14 -- nvmf/common.sh@174 -- # ip link set nvmf_tgt_if netns nvmf_tgt_ns_spdk 00:32:44.866 15:51:14 -- nvmf/common.sh@175 -- # ip link set nvmf_tgt_if2 netns nvmf_tgt_ns_spdk 00:32:44.866 15:51:14 -- nvmf/common.sh@178 -- # ip addr add 10.0.0.1/24 dev nvmf_init_if 00:32:44.866 15:51:14 -- nvmf/common.sh@179 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.2/24 dev nvmf_tgt_if 00:32:44.866 15:51:14 -- nvmf/common.sh@180 -- # ip netns exec nvmf_tgt_ns_spdk ip addr add 10.0.0.3/24 dev nvmf_tgt_if2 00:32:44.866 15:51:14 -- nvmf/common.sh@183 -- # ip link set nvmf_init_if up 00:32:44.866 15:51:14 -- nvmf/common.sh@184 -- # ip link set nvmf_init_br up 00:32:44.866 15:51:14 -- nvmf/common.sh@185 -- # ip link set nvmf_tgt_br up 00:32:44.866 15:51:14 -- nvmf/common.sh@186 -- # ip link set nvmf_tgt_br2 up 00:32:44.867 15:51:14 -- nvmf/common.sh@187 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if up 00:32:44.867 15:51:14 -- nvmf/common.sh@188 -- # ip netns exec nvmf_tgt_ns_spdk ip link set nvmf_tgt_if2 up 00:32:44.867 15:51:14 -- 
nvmf/common.sh@189 -- # ip netns exec nvmf_tgt_ns_spdk ip link set lo up 00:32:44.867 15:51:15 -- nvmf/common.sh@192 -- # ip link add nvmf_br type bridge 00:32:44.867 15:51:15 -- nvmf/common.sh@193 -- # ip link set nvmf_br up 00:32:44.867 15:51:15 -- nvmf/common.sh@196 -- # ip link set nvmf_init_br master nvmf_br 00:32:44.867 15:51:15 -- nvmf/common.sh@197 -- # ip link set nvmf_tgt_br master nvmf_br 00:32:44.867 15:51:15 -- nvmf/common.sh@198 -- # ip link set nvmf_tgt_br2 master nvmf_br 00:32:44.867 15:51:15 -- nvmf/common.sh@201 -- # iptables -I INPUT 1 -i nvmf_init_if -p tcp --dport 4420 -j ACCEPT 00:32:44.867 15:51:15 -- nvmf/common.sh@202 -- # iptables -A FORWARD -i nvmf_br -o nvmf_br -j ACCEPT 00:32:44.867 15:51:15 -- nvmf/common.sh@205 -- # ping -c 1 10.0.0.2 00:32:44.867 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:44.867 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.112 ms 00:32:44.867 00:32:44.867 --- 10.0.0.2 ping statistics --- 00:32:44.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.867 rtt min/avg/max/mdev = 0.112/0.112/0.112/0.000 ms 00:32:44.867 15:51:15 -- nvmf/common.sh@206 -- # ping -c 1 10.0.0.3 00:32:44.867 PING 10.0.0.3 (10.0.0.3) 56(84) bytes of data. 00:32:44.867 64 bytes from 10.0.0.3: icmp_seq=1 ttl=64 time=0.063 ms 00:32:44.867 00:32:44.867 --- 10.0.0.3 ping statistics --- 00:32:44.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.867 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:32:44.867 15:51:15 -- nvmf/common.sh@207 -- # ip netns exec nvmf_tgt_ns_spdk ping -c 1 10.0.0.1 00:32:44.867 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:44.867 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.040 ms 00:32:44.867 00:32:44.867 --- 10.0.0.1 ping statistics --- 00:32:44.867 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:44.867 rtt min/avg/max/mdev = 0.040/0.040/0.040/0.000 ms 00:32:44.867 15:51:15 -- nvmf/common.sh@209 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:44.867 15:51:15 -- nvmf/common.sh@422 -- # return 0 00:32:44.867 15:51:15 -- nvmf/common.sh@439 -- # '[' iso == iso ']' 00:32:44.867 15:51:15 -- nvmf/common.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:32:45.448 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:32:45.705 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:32:45.705 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:32:45.705 15:51:15 -- nvmf/common.sh@443 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:45.705 15:51:15 -- nvmf/common.sh@444 -- # [[ tcp == \r\d\m\a ]] 00:32:45.705 15:51:15 -- nvmf/common.sh@453 -- # [[ tcp == \t\c\p ]] 00:32:45.705 15:51:15 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:45.705 15:51:15 -- nvmf/common.sh@457 -- # '[' tcp == tcp ']' 00:32:45.705 15:51:15 -- nvmf/common.sh@463 -- # modprobe nvme-tcp 00:32:45.705 15:51:15 -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:32:45.705 15:51:15 -- nvmf/common.sh@468 -- # timing_enter start_nvmf_tgt 00:32:45.705 15:51:15 -- common/autotest_common.sh@710 -- # xtrace_disable 00:32:45.705 15:51:15 -- common/autotest_common.sh@10 -- # set +x 00:32:45.705 15:51:15 -- nvmf/common.sh@469 -- # ip netns exec nvmf_tgt_ns_spdk /home/vagrant/spdk_repo/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:32:45.705 15:51:15 -- nvmf/common.sh@470 -- # nvmfpid=91609 00:32:45.705 15:51:15 -- nvmf/common.sh@471 -- # waitforlisten 91609 00:32:45.705 Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:45.705 15:51:15 -- common/autotest_common.sh@817 -- # '[' -z 91609 ']' 00:32:45.705 15:51:15 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:45.705 15:51:15 -- common/autotest_common.sh@822 -- # local max_retries=100 00:32:45.705 15:51:15 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:45.705 15:51:15 -- common/autotest_common.sh@826 -- # xtrace_disable 00:32:45.705 15:51:15 -- common/autotest_common.sh@10 -- # set +x 00:32:45.963 [2024-04-26 15:51:16.015250] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:32:45.963 [2024-04-26 15:51:16.015355] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:45.963 [2024-04-26 15:51:16.158451] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:46.221 [2024-04-26 15:51:16.296123] app.c: 523:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:46.221 [2024-04-26 15:51:16.296215] app.c: 524:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:46.221 [2024-04-26 15:51:16.296242] app.c: 529:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:46.221 [2024-04-26 15:51:16.296253] app.c: 530:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:46.221 [2024-04-26 15:51:16.296263] app.c: 531:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:46.221 [2024-04-26 15:51:16.296471] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:32:46.221 [2024-04-26 15:51:16.296629] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 2 00:32:46.221 [2024-04-26 15:51:16.297305] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 3 00:32:46.221 [2024-04-26 15:51:16.297315] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:32:46.787 15:51:17 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:32:46.787 15:51:17 -- common/autotest_common.sh@850 -- # return 0 00:32:46.787 15:51:17 -- nvmf/common.sh@472 -- # timing_exit start_nvmf_tgt 00:32:46.787 15:51:17 -- common/autotest_common.sh@716 -- # xtrace_disable 00:32:46.787 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:46.787 15:51:17 -- nvmf/common.sh@473 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:46.787 15:51:17 -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:32:46.787 15:51:17 -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:32:46.787 15:51:17 -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:32:46.787 15:51:17 -- scripts/common.sh@309 -- # local bdf bdfs 00:32:46.787 15:51:17 -- scripts/common.sh@310 -- # local nvmes 00:32:46.787 15:51:17 -- scripts/common.sh@312 -- # [[ -n '' ]] 00:32:46.787 15:51:17 -- scripts/common.sh@315 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:32:46.787 15:51:17 -- scripts/common.sh@315 -- # iter_pci_class_code 01 08 02 00:32:46.787 15:51:17 -- scripts/common.sh@295 -- # local bdf= 00:32:47.046 15:51:17 -- scripts/common.sh@297 -- # iter_all_pci_class_code 01 08 02 00:32:47.046 15:51:17 -- scripts/common.sh@230 
-- # local class 00:32:47.046 15:51:17 -- scripts/common.sh@231 -- # local subclass 00:32:47.046 15:51:17 -- scripts/common.sh@232 -- # local progif 00:32:47.046 15:51:17 -- scripts/common.sh@233 -- # printf %02x 1 00:32:47.046 15:51:17 -- scripts/common.sh@233 -- # class=01 00:32:47.046 15:51:17 -- scripts/common.sh@234 -- # printf %02x 8 00:32:47.046 15:51:17 -- scripts/common.sh@234 -- # subclass=08 00:32:47.046 15:51:17 -- scripts/common.sh@235 -- # printf %02x 2 00:32:47.046 15:51:17 -- scripts/common.sh@235 -- # progif=02 00:32:47.046 15:51:17 -- scripts/common.sh@237 -- # hash lspci 00:32:47.046 15:51:17 -- scripts/common.sh@238 -- # '[' 02 '!=' 00 ']' 00:32:47.046 15:51:17 -- scripts/common.sh@239 -- # lspci -mm -n -D 00:32:47.046 15:51:17 -- scripts/common.sh@240 -- # grep -i -- -p02 00:32:47.046 15:51:17 -- scripts/common.sh@241 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:32:47.046 15:51:17 -- scripts/common.sh@242 -- # tr -d '"' 00:32:47.046 15:51:17 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:47.046 15:51:17 -- scripts/common.sh@298 -- # pci_can_use 0000:00:10.0 00:32:47.046 15:51:17 -- scripts/common.sh@15 -- # local i 00:32:47.046 15:51:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:10.0 ]] 00:32:47.046 15:51:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:32:47.046 15:51:17 -- scripts/common.sh@24 -- # return 0 00:32:47.046 15:51:17 -- scripts/common.sh@299 -- # echo 0000:00:10.0 00:32:47.046 15:51:17 -- scripts/common.sh@297 -- # for bdf in $(iter_all_pci_class_code "$@") 00:32:47.046 15:51:17 -- scripts/common.sh@298 -- # pci_can_use 0000:00:11.0 00:32:47.046 15:51:17 -- scripts/common.sh@15 -- # local i 00:32:47.046 15:51:17 -- scripts/common.sh@18 -- # [[ =~ 0000:00:11.0 ]] 00:32:47.046 15:51:17 -- scripts/common.sh@22 -- # [[ -z '' ]] 00:32:47.046 15:51:17 -- scripts/common.sh@24 -- # return 0 00:32:47.046 15:51:17 -- scripts/common.sh@299 -- # echo 0000:00:11.0 00:32:47.046 15:51:17 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:47.046 15:51:17 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:32:47.046 15:51:17 -- scripts/common.sh@320 -- # uname -s 00:32:47.046 15:51:17 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:47.046 15:51:17 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:47.046 15:51:17 -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:32:47.046 15:51:17 -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:32:47.046 15:51:17 -- scripts/common.sh@320 -- # uname -s 00:32:47.046 15:51:17 -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:32:47.046 15:51:17 -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:32:47.046 15:51:17 -- scripts/common.sh@325 -- # (( 2 )) 00:32:47.046 15:51:17 -- scripts/common.sh@326 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@76 -- # (( 2 > 0 )) 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@78 -- # nvme=0000:00:10.0 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:32:47.046 15:51:17 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:47.046 15:51:17 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:47.046 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:47.046 ************************************ 00:32:47.046 START TEST spdk_target_abort 00:32:47.046 ************************************ 00:32:47.046 15:51:17 -- common/autotest_common.sh@1111 -- # spdk_target 
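For reference, the nvme_in_userspace scan traced above boils down to filtering lspci output by PCI class 01 (mass storage), subclass 08 (non-volatile memory), prog-if 02 (NVMe I/O controller), then cross-checking each BDF against sysfs. An approximate stand-alone sketch of the same idea (not the exact helper from scripts/common.sh):

    # Print the PCI addresses of all NVMe controllers (class code 0108, prog-if 02)
    lspci -mm -n -D | grep -i -- -p02 | awk -v cc='"0108"' '$2 == cc { print $1 }'

The two BDFs found here, 0000:00:10.0 and 0000:00:11.0, are checked under /sys/bus/pci/drivers/nvme before being added to the nvmes array; the first one becomes the backing controller for the SPDK target below.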
00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:00:10.0 -b spdk_target 00:32:47.046 15:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.046 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:47.046 spdk_targetn1 00:32:47.046 15:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:47.046 15:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.046 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:47.046 [2024-04-26 15:51:17.261545] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:47.046 15:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:32:47.046 15:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.046 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:47.046 15:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:32:47.046 15:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.046 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:47.046 15:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:32:47.046 15:51:17 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:47.046 15:51:17 -- common/autotest_common.sh@10 -- # set +x 00:32:47.046 [2024-04-26 15:51:17.289704] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:47.046 15:51:17 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@24 -- # local target r 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.046 15:51:17 -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:47.046 15:51:17 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:50.331 Initializing NVMe Controllers 00:32:50.331 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:50.331 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:50.331 Initialization complete. Launching workers. 00:32:50.331 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 11233, failed: 0 00:32:50.331 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1122, failed to submit 10111 00:32:50.331 success 772, unsuccess 350, failed 0 00:32:50.331 15:51:20 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:50.332 15:51:20 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:53.671 Initializing NVMe Controllers 00:32:53.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:53.671 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:53.671 Initialization complete. Launching workers. 00:32:53.671 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5974, failed: 0 00:32:53.671 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1284, failed to submit 4690 00:32:53.671 success 244, unsuccess 1040, failed 0 00:32:53.671 15:51:23 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:32:53.671 15:51:23 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:56.960 Initializing NVMe Controllers 00:32:56.960 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:32:56.960 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:32:56.960 Initialization complete. Launching workers. 
00:32:56.960 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 29873, failed: 0 00:32:56.960 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2580, failed to submit 27293 00:32:56.960 success 488, unsuccess 2092, failed 0 00:32:56.960 15:51:27 -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:32:56.960 15:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:56.960 15:51:27 -- common/autotest_common.sh@10 -- # set +x 00:32:56.960 15:51:27 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:56.960 15:51:27 -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:32:56.960 15:51:27 -- common/autotest_common.sh@549 -- # xtrace_disable 00:32:56.960 15:51:27 -- common/autotest_common.sh@10 -- # set +x 00:32:59.495 15:51:29 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:32:59.495 15:51:29 -- target/abort_qd_sizes.sh@61 -- # killprocess 91609 00:32:59.495 15:51:29 -- common/autotest_common.sh@936 -- # '[' -z 91609 ']' 00:32:59.495 15:51:29 -- common/autotest_common.sh@940 -- # kill -0 91609 00:32:59.495 15:51:29 -- common/autotest_common.sh@941 -- # uname 00:32:59.496 15:51:29 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:32:59.496 15:51:29 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 91609 00:32:59.496 killing process with pid 91609 00:32:59.496 15:51:29 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:32:59.496 15:51:29 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:32:59.496 15:51:29 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 91609' 00:32:59.496 15:51:29 -- common/autotest_common.sh@955 -- # kill 91609 00:32:59.496 15:51:29 -- common/autotest_common.sh@960 -- # wait 91609 00:32:59.496 00:32:59.496 real 0m12.460s 00:32:59.496 user 0m48.716s 00:32:59.496 sys 0m1.730s 00:32:59.496 ************************************ 00:32:59.496 END TEST spdk_target_abort 00:32:59.496 ************************************ 00:32:59.496 15:51:29 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:32:59.496 15:51:29 -- common/autotest_common.sh@10 -- # set +x 00:32:59.496 15:51:29 -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:32:59.496 15:51:29 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:32:59.496 15:51:29 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:32:59.496 15:51:29 -- common/autotest_common.sh@10 -- # set +x 00:32:59.496 ************************************ 00:32:59.496 START TEST kernel_target_abort 00:32:59.496 ************************************ 00:32:59.496 15:51:29 -- common/autotest_common.sh@1111 -- # kernel_target 00:32:59.496 15:51:29 -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:32:59.496 15:51:29 -- nvmf/common.sh@717 -- # local ip 00:32:59.496 15:51:29 -- nvmf/common.sh@718 -- # ip_candidates=() 00:32:59.496 15:51:29 -- nvmf/common.sh@718 -- # local -A ip_candidates 00:32:59.496 15:51:29 -- nvmf/common.sh@720 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.496 15:51:29 -- nvmf/common.sh@721 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.496 15:51:29 -- nvmf/common.sh@723 -- # [[ -z tcp ]] 00:32:59.496 15:51:29 -- nvmf/common.sh@723 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.496 15:51:29 -- nvmf/common.sh@724 -- # ip=NVMF_INITIATOR_IP 00:32:59.496 15:51:29 -- nvmf/common.sh@726 -- # [[ -z 10.0.0.1 ]] 00:32:59.496 15:51:29 -- nvmf/common.sh@731 -- # echo 10.0.0.1 
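Both abort cases in this test drive the same rabort helper: it assembles a transport ID string and runs the standalone abort example once per queue depth, first against the SPDK target above and then against the kernel target configured next. Roughly (a sketch of the loop; the real logic lives in the abort_qd_sizes.sh script being traced):

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # 4 KiB mixed read/write load with qd outstanding I/Os, abort commands issued against them
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

The per-run summary ('success N, unsuccess M, failed 0') counts, roughly, aborts that took effect, aborts the controller completed without aborting anything (typically because the target I/O had already finished), and abort commands that themselves errored.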
00:32:59.496 15:51:29 -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:59.496 15:51:29 -- nvmf/common.sh@621 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:59.496 15:51:29 -- nvmf/common.sh@623 -- # nvmet=/sys/kernel/config/nvmet 00:32:59.496 15:51:29 -- nvmf/common.sh@624 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:59.496 15:51:29 -- nvmf/common.sh@625 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:59.496 15:51:29 -- nvmf/common.sh@626 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:59.496 15:51:29 -- nvmf/common.sh@628 -- # local block nvme 00:32:59.496 15:51:29 -- nvmf/common.sh@630 -- # [[ ! -e /sys/module/nvmet ]] 00:32:59.496 15:51:29 -- nvmf/common.sh@631 -- # modprobe nvmet 00:32:59.754 15:51:29 -- nvmf/common.sh@634 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:59.754 15:51:29 -- nvmf/common.sh@636 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:00.012 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:00.012 Waiting for block devices as requested 00:33:00.012 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:00.012 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:00.271 15:51:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:00.271 15:51:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n1 ]] 00:33:00.271 15:51:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n1 00:33:00.271 15:51:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n1 00:33:00.271 15:51:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:33:00.271 15:51:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:00.271 15:51:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n1 00:33:00.271 15:51:30 -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:33:00.271 15:51:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n1 00:33:00.271 No valid GPT data, bailing 00:33:00.271 15:51:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:33:00.271 15:51:30 -- scripts/common.sh@391 -- # pt= 00:33:00.271 15:51:30 -- scripts/common.sh@392 -- # return 1 00:33:00.271 15:51:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n1 00:33:00.271 15:51:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:00.271 15:51:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n2 ]] 00:33:00.271 15:51:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n2 00:33:00.271 15:51:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n2 00:33:00.271 15:51:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:33:00.271 15:51:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:00.271 15:51:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n2 00:33:00.271 15:51:30 -- scripts/common.sh@378 -- # local block=nvme0n2 pt 00:33:00.271 15:51:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n2 00:33:00.271 No valid GPT data, bailing 00:33:00.271 15:51:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n2 00:33:00.271 15:51:30 -- scripts/common.sh@391 -- # pt= 00:33:00.271 15:51:30 -- scripts/common.sh@392 -- # return 1 00:33:00.271 15:51:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n2 00:33:00.271 15:51:30 -- nvmf/common.sh@639 -- # for 
block in /sys/block/nvme* 00:33:00.271 15:51:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme0n3 ]] 00:33:00.271 15:51:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme0n3 00:33:00.271 15:51:30 -- common/autotest_common.sh@1648 -- # local device=nvme0n3 00:33:00.271 15:51:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:33:00.271 15:51:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:00.271 15:51:30 -- nvmf/common.sh@642 -- # block_in_use nvme0n3 00:33:00.271 15:51:30 -- scripts/common.sh@378 -- # local block=nvme0n3 pt 00:33:00.271 15:51:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme0n3 00:33:00.271 No valid GPT data, bailing 00:33:00.530 15:51:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n3 00:33:00.530 15:51:30 -- scripts/common.sh@391 -- # pt= 00:33:00.530 15:51:30 -- scripts/common.sh@392 -- # return 1 00:33:00.530 15:51:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme0n3 00:33:00.530 15:51:30 -- nvmf/common.sh@639 -- # for block in /sys/block/nvme* 00:33:00.530 15:51:30 -- nvmf/common.sh@640 -- # [[ -e /sys/block/nvme1n1 ]] 00:33:00.530 15:51:30 -- nvmf/common.sh@641 -- # is_block_zoned nvme1n1 00:33:00.530 15:51:30 -- common/autotest_common.sh@1648 -- # local device=nvme1n1 00:33:00.530 15:51:30 -- common/autotest_common.sh@1650 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:33:00.530 15:51:30 -- common/autotest_common.sh@1651 -- # [[ none != none ]] 00:33:00.530 15:51:30 -- nvmf/common.sh@642 -- # block_in_use nvme1n1 00:33:00.530 15:51:30 -- scripts/common.sh@378 -- # local block=nvme1n1 pt 00:33:00.530 15:51:30 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py nvme1n1 00:33:00.530 No valid GPT data, bailing 00:33:00.530 15:51:30 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:33:00.530 15:51:30 -- scripts/common.sh@391 -- # pt= 00:33:00.530 15:51:30 -- scripts/common.sh@392 -- # return 1 00:33:00.530 15:51:30 -- nvmf/common.sh@642 -- # nvme=/dev/nvme1n1 00:33:00.530 15:51:30 -- nvmf/common.sh@645 -- # [[ -b /dev/nvme1n1 ]] 00:33:00.530 15:51:30 -- nvmf/common.sh@647 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:00.530 15:51:30 -- nvmf/common.sh@648 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:00.530 15:51:30 -- nvmf/common.sh@649 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:33:00.530 15:51:30 -- nvmf/common.sh@654 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:33:00.530 15:51:30 -- nvmf/common.sh@656 -- # echo 1 00:33:00.530 15:51:30 -- nvmf/common.sh@657 -- # echo /dev/nvme1n1 00:33:00.530 15:51:30 -- nvmf/common.sh@658 -- # echo 1 00:33:00.530 15:51:30 -- nvmf/common.sh@660 -- # echo 10.0.0.1 00:33:00.530 15:51:30 -- nvmf/common.sh@661 -- # echo tcp 00:33:00.530 15:51:30 -- nvmf/common.sh@662 -- # echo 4420 00:33:00.530 15:51:30 -- nvmf/common.sh@663 -- # echo ipv4 00:33:00.530 15:51:30 -- nvmf/common.sh@666 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:33:00.530 15:51:30 -- nvmf/common.sh@669 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 --hostid=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 -a 10.0.0.1 -t tcp -s 4420 00:33:00.530 00:33:00.530 Discovery Log Number of Records 2, Generation counter 2 00:33:00.530 =====Discovery Log Entry 0====== 00:33:00.530 trtype: tcp 00:33:00.530 adrfam: ipv4 00:33:00.530 
subtype: current discovery subsystem 00:33:00.530 treq: not specified, sq flow control disable supported 00:33:00.530 portid: 1 00:33:00.530 trsvcid: 4420 00:33:00.530 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:33:00.530 traddr: 10.0.0.1 00:33:00.530 eflags: none 00:33:00.530 sectype: none 00:33:00.530 =====Discovery Log Entry 1====== 00:33:00.530 trtype: tcp 00:33:00.530 adrfam: ipv4 00:33:00.530 subtype: nvme subsystem 00:33:00.530 treq: not specified, sq flow control disable supported 00:33:00.530 portid: 1 00:33:00.530 trsvcid: 4420 00:33:00.530 subnqn: nqn.2016-06.io.spdk:testnqn 00:33:00.530 traddr: 10.0.0.1 00:33:00.530 eflags: none 00:33:00.530 sectype: none 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@24 -- # local target r 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:00.530 15:51:30 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:03.812 Initializing NVMe Controllers 00:33:03.812 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:03.812 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:03.812 Initialization complete. Launching workers. 
00:33:03.812 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33252, failed: 0 00:33:03.812 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 33252, failed to submit 0 00:33:03.812 success 0, unsuccess 33252, failed 0 00:33:03.812 15:51:33 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:03.812 15:51:33 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:07.110 Initializing NVMe Controllers 00:33:07.110 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:07.110 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:07.110 Initialization complete. Launching workers. 00:33:07.110 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 67793, failed: 0 00:33:07.110 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 28995, failed to submit 38798 00:33:07.110 success 0, unsuccess 28995, failed 0 00:33:07.110 15:51:37 -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:33:07.110 15:51:37 -- target/abort_qd_sizes.sh@34 -- # /home/vagrant/spdk_repo/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:33:10.408 Initializing NVMe Controllers 00:33:10.408 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:33:10.408 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:33:10.408 Initialization complete. Launching workers. 00:33:10.408 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 76308, failed: 0 00:33:10.408 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 19055, failed to submit 57253 00:33:10.408 success 0, unsuccess 19055, failed 0 00:33:10.408 15:51:40 -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:33:10.408 15:51:40 -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:33:10.408 15:51:40 -- nvmf/common.sh@675 -- # echo 0 00:33:10.408 15:51:40 -- nvmf/common.sh@677 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.408 15:51:40 -- nvmf/common.sh@678 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:33:10.408 15:51:40 -- nvmf/common.sh@679 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:10.408 15:51:40 -- nvmf/common.sh@680 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:33:10.408 15:51:40 -- nvmf/common.sh@682 -- # modules=(/sys/module/nvmet/holders/*) 00:33:10.408 15:51:40 -- nvmf/common.sh@684 -- # modprobe -r nvmet_tcp nvmet 00:33:10.408 15:51:40 -- nvmf/common.sh@687 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:33:10.972 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:12.871 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:33:12.871 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:33:12.871 00:33:12.871 real 0m13.061s 00:33:12.871 user 0m6.139s 00:33:12.871 sys 0m4.264s 00:33:12.871 15:51:42 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:12.871 15:51:42 -- common/autotest_common.sh@10 -- # set +x 00:33:12.871 
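The kernel_target_abort case above swaps the SPDK target for the Linux kernel nvmet target, wired up purely through configfs by configure_kernel_target and torn down by clean_kernel_target (both in test/nvmf/common.sh). Stripped of test plumbing, a minimal sketch of that sequence, using the NQN and backing device from this run (the serial/model attribute write is omitted):

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    port=/sys/kernel/config/nvmet/ports/1
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir "$port"
    echo 1            > "$subsys/attr_allow_any_host"
    echo /dev/nvme1n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > "$port/addr_traddr"
    echo tcp          > "$port/addr_trtype"
    echo 4420         > "$port/addr_trsvcid"
    echo ipv4         > "$port/addr_adrfam"
    ln -s "$subsys" "$port/subsystems/"
    nvme discover -t tcp -a 10.0.0.1 -s 4420      # sanity check, as in the discovery log above

    # teardown, mirroring clean_kernel_target
    echo 0 > "$subsys/namespaces/1/enable"
    rm -f "$port/subsystems/nqn.2016-06.io.spdk:testnqn"
    rmdir "$subsys/namespaces/1" "$port" "$subsys"
    modprobe -r nvmet_tcp nvmet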
************************************ 00:33:12.871 END TEST kernel_target_abort 00:33:12.871 ************************************ 00:33:12.871 15:51:42 -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:33:12.871 15:51:42 -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:33:12.871 15:51:42 -- nvmf/common.sh@477 -- # nvmfcleanup 00:33:12.871 15:51:42 -- nvmf/common.sh@117 -- # sync 00:33:12.871 15:51:42 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:12.871 15:51:42 -- nvmf/common.sh@120 -- # set +e 00:33:12.871 15:51:42 -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:12.871 15:51:42 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:12.871 rmmod nvme_tcp 00:33:12.871 rmmod nvme_fabrics 00:33:12.871 rmmod nvme_keyring 00:33:12.871 15:51:42 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:12.871 15:51:42 -- nvmf/common.sh@124 -- # set -e 00:33:12.871 15:51:42 -- nvmf/common.sh@125 -- # return 0 00:33:12.871 15:51:42 -- nvmf/common.sh@478 -- # '[' -n 91609 ']' 00:33:12.871 15:51:42 -- nvmf/common.sh@479 -- # killprocess 91609 00:33:12.871 15:51:42 -- common/autotest_common.sh@936 -- # '[' -z 91609 ']' 00:33:12.871 15:51:42 -- common/autotest_common.sh@940 -- # kill -0 91609 00:33:12.871 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 940: kill: (91609) - No such process 00:33:12.871 Process with pid 91609 is not found 00:33:12.871 15:51:42 -- common/autotest_common.sh@963 -- # echo 'Process with pid 91609 is not found' 00:33:12.871 15:51:42 -- nvmf/common.sh@481 -- # '[' iso == iso ']' 00:33:12.871 15:51:42 -- nvmf/common.sh@482 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:33:13.134 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:13.134 Waiting for block devices as requested 00:33:13.134 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:33:13.396 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:33:13.396 15:51:43 -- nvmf/common.sh@484 -- # [[ tcp == \t\c\p ]] 00:33:13.396 15:51:43 -- nvmf/common.sh@485 -- # nvmf_tcp_fini 00:33:13.396 15:51:43 -- nvmf/common.sh@274 -- # [[ nvmf_tgt_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:13.396 15:51:43 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:13.396 15:51:43 -- nvmf/common.sh@617 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:13.396 15:51:43 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:33:13.396 15:51:43 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:13.396 15:51:43 -- nvmf/common.sh@279 -- # ip -4 addr flush nvmf_init_if 00:33:13.396 00:33:13.396 real 0m28.974s 00:33:13.396 user 0m56.138s 00:33:13.396 sys 0m7.429s 00:33:13.396 15:51:43 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:13.396 15:51:43 -- common/autotest_common.sh@10 -- # set +x 00:33:13.396 ************************************ 00:33:13.396 END TEST nvmf_abort_qd_sizes 00:33:13.396 ************************************ 00:33:13.396 15:51:43 -- spdk/autotest.sh@293 -- # run_test keyring_file /home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:33:13.396 15:51:43 -- common/autotest_common.sh@1087 -- # '[' 2 -le 1 ']' 00:33:13.396 15:51:43 -- common/autotest_common.sh@1093 -- # xtrace_disable 00:33:13.396 15:51:43 -- common/autotest_common.sh@10 -- # set +x 00:33:13.396 ************************************ 00:33:13.396 START TEST keyring_file 00:33:13.396 ************************************ 00:33:13.396 15:51:43 -- common/autotest_common.sh@1111 -- # 
/home/vagrant/spdk_repo/spdk/test/keyring/file.sh 00:33:13.655 * Looking for test storage... 00:33:13.655 * Found test storage at /home/vagrant/spdk_repo/spdk/test/keyring 00:33:13.655 15:51:43 -- keyring/file.sh@11 -- # source /home/vagrant/spdk_repo/spdk/test/keyring/common.sh 00:33:13.655 15:51:43 -- keyring/common.sh@4 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:33:13.655 15:51:43 -- nvmf/common.sh@7 -- # uname -s 00:33:13.655 15:51:43 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:13.655 15:51:43 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:13.655 15:51:43 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:13.655 15:51:43 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:13.655 15:51:43 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:13.655 15:51:43 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:13.655 15:51:43 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:13.655 15:51:43 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:13.655 15:51:43 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:13.655 15:51:43 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:13.655 15:51:43 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:33:13.655 15:51:43 -- nvmf/common.sh@18 -- # NVME_HOSTID=77f885f1-61b5-4bed-a5a2-ea12e8a4ade9 00:33:13.655 15:51:43 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:13.655 15:51:43 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:13.655 15:51:43 -- nvmf/common.sh@21 -- # NET_TYPE=virt 00:33:13.655 15:51:43 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:13.655 15:51:43 -- nvmf/common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:33:13.655 15:51:43 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:13.655 15:51:43 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:13.655 15:51:43 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:13.655 15:51:43 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.655 15:51:43 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.655 15:51:43 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.655 15:51:43 -- paths/export.sh@5 -- # export PATH 00:33:13.655 15:51:43 -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:13.655 15:51:43 -- nvmf/common.sh@47 -- # : 0 00:33:13.655 15:51:43 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:13.655 15:51:43 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:13.655 15:51:43 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:13.655 15:51:43 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:13.655 15:51:43 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:13.655 15:51:43 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:13.655 15:51:43 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:13.655 15:51:43 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:13.655 15:51:43 -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:33:13.655 15:51:43 -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:33:13.655 15:51:43 -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:33:13.655 15:51:43 -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:33:13.655 15:51:43 -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:33:13.655 15:51:43 -- keyring/file.sh@24 -- # trap cleanup EXIT 00:33:13.655 15:51:43 -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:13.655 15:51:43 -- keyring/common.sh@15 -- # local name key digest path 00:33:13.655 15:51:43 -- keyring/common.sh@17 -- # name=key0 00:33:13.655 15:51:43 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:13.655 15:51:43 -- keyring/common.sh@17 -- # digest=0 00:33:13.655 15:51:43 -- keyring/common.sh@18 -- # mktemp 00:33:13.655 15:51:43 -- keyring/common.sh@18 -- # path=/tmp/tmp.C8k1P3Kh4b 00:33:13.655 15:51:43 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:13.655 15:51:43 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:13.655 15:51:43 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:13.655 15:51:43 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:13.655 15:51:43 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:13.655 15:51:43 -- nvmf/common.sh@693 -- # digest=0 00:33:13.655 15:51:43 -- nvmf/common.sh@694 -- # python - 00:33:13.655 15:51:43 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.C8k1P3Kh4b 00:33:13.655 15:51:43 -- keyring/common.sh@23 -- # echo /tmp/tmp.C8k1P3Kh4b 00:33:13.655 15:51:43 -- keyring/file.sh@26 -- # key0path=/tmp/tmp.C8k1P3Kh4b 00:33:13.655 15:51:43 -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:33:13.655 15:51:43 -- keyring/common.sh@15 -- # local name key digest path 00:33:13.655 15:51:43 -- keyring/common.sh@17 -- # name=key1 00:33:13.655 15:51:43 -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:33:13.655 15:51:43 -- keyring/common.sh@17 -- # digest=0 00:33:13.655 15:51:43 -- keyring/common.sh@18 -- # mktemp 00:33:13.655 15:51:43 -- keyring/common.sh@18 -- # path=/tmp/tmp.SGpQ3c3Jyj 00:33:13.655 15:51:43 -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:33:13.655 15:51:43 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 
112233445566778899aabbccddeeff00 0 00:33:13.655 15:51:43 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:13.655 15:51:43 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:13.655 15:51:43 -- nvmf/common.sh@693 -- # key=112233445566778899aabbccddeeff00 00:33:13.655 15:51:43 -- nvmf/common.sh@693 -- # digest=0 00:33:13.655 15:51:43 -- nvmf/common.sh@694 -- # python - 00:33:13.655 15:51:43 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SGpQ3c3Jyj 00:33:13.655 15:51:43 -- keyring/common.sh@23 -- # echo /tmp/tmp.SGpQ3c3Jyj 00:33:13.655 15:51:43 -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SGpQ3c3Jyj 00:33:13.655 15:51:43 -- keyring/file.sh@30 -- # tgtpid=92509 00:33:13.655 15:51:43 -- keyring/file.sh@29 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:33:13.655 15:51:43 -- keyring/file.sh@32 -- # waitforlisten 92509 00:33:13.655 15:51:43 -- common/autotest_common.sh@817 -- # '[' -z 92509 ']' 00:33:13.655 15:51:43 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:13.655 15:51:43 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:13.655 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:13.655 15:51:43 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:13.655 15:51:43 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:13.655 15:51:43 -- common/autotest_common.sh@10 -- # set +x 00:33:13.914 [2024-04-26 15:51:44.002063] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:33:13.914 [2024-04-26 15:51:44.002231] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92509 ] 00:33:13.914 [2024-04-26 15:51:44.141259] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.172 [2024-04-26 15:51:44.274432] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 0 00:33:15.126 15:51:45 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:15.126 15:51:45 -- common/autotest_common.sh@850 -- # return 0 00:33:15.126 15:51:45 -- keyring/file.sh@33 -- # rpc_cmd 00:33:15.126 15:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.126 15:51:45 -- common/autotest_common.sh@10 -- # set +x 00:33:15.126 [2024-04-26 15:51:45.052419] tcp.c: 670:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:15.126 null0 00:33:15.126 [2024-04-26 15:51:45.084332] tcp.c: 926:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:33:15.126 [2024-04-26 15:51:45.084593] tcp.c: 965:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:33:15.126 [2024-04-26 15:51:45.092327] tcp.c:3655:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:33:15.126 15:51:45 -- common/autotest_common.sh@577 -- # [[ 0 == 0 ]] 00:33:15.126 15:51:45 -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:15.126 15:51:45 -- common/autotest_common.sh@638 -- # local es=0 00:33:15.126 15:51:45 -- common/autotest_common.sh@640 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:15.126 15:51:45 -- common/autotest_common.sh@626 -- # local arg=rpc_cmd 00:33:15.126 15:51:45 -- 
common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:15.126 15:51:45 -- common/autotest_common.sh@630 -- # type -t rpc_cmd 00:33:15.126 15:51:45 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:15.126 15:51:45 -- common/autotest_common.sh@641 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:33:15.126 15:51:45 -- common/autotest_common.sh@549 -- # xtrace_disable 00:33:15.126 15:51:45 -- common/autotest_common.sh@10 -- # set +x 00:33:15.126 [2024-04-26 15:51:45.108346] nvmf_rpc.c: 769:nvmf_rpc_listen_paused: *ERROR*: A listener already exists with different secure channel option.2024/04/26 15:51:45 error on JSON-RPC call, method: nvmf_subsystem_add_listener, params: map[listen_address:map[traddr:127.0.0.1 trsvcid:4420 trtype:tcp] nqn:nqn.2016-06.io.spdk:cnode0 secure_channel:%!s(bool=false)], err: error received for nvmf_subsystem_add_listener method, err: Code=-32602 Msg=Invalid parameters 00:33:15.126 request: 00:33:15.126 { 00:33:15.126 "method": "nvmf_subsystem_add_listener", 00:33:15.126 "params": { 00:33:15.126 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:33:15.126 "secure_channel": false, 00:33:15.126 "listen_address": { 00:33:15.126 "trtype": "tcp", 00:33:15.126 "traddr": "127.0.0.1", 00:33:15.126 "trsvcid": "4420" 00:33:15.126 } 00:33:15.126 } 00:33:15.126 } 00:33:15.126 Got JSON-RPC error response 00:33:15.126 GoRPCClient: error on JSON-RPC call 00:33:15.126 15:51:45 -- common/autotest_common.sh@577 -- # [[ 1 == 0 ]] 00:33:15.127 15:51:45 -- common/autotest_common.sh@641 -- # es=1 00:33:15.127 15:51:45 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:15.127 15:51:45 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:15.127 15:51:45 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:15.127 15:51:45 -- keyring/file.sh@46 -- # bperfpid=92543 00:33:15.127 15:51:45 -- keyring/file.sh@48 -- # waitforlisten 92543 /var/tmp/bperf.sock 00:33:15.127 15:51:45 -- keyring/file.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:33:15.127 15:51:45 -- common/autotest_common.sh@817 -- # '[' -z 92543 ']' 00:33:15.127 15:51:45 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:15.127 15:51:45 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:15.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:15.127 15:51:45 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:15.127 15:51:45 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:15.127 15:51:45 -- common/autotest_common.sh@10 -- # set +x 00:33:15.127 [2024-04-26 15:51:45.172570] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 
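The two key files handed to bdevperf below were prepared a few lines earlier by prep_key (keyring/common.sh): the raw hex key is wrapped into the NVMe TLS PSK interchange format and written to a temp file that must stay mode 0600. A condensed sketch of that flow plus the registration call, using the file name from this run (bperf_cmd is simply rpc.py pointed at bdevperf's own RPC socket):

    path=$(mktemp)                                 # /tmp/tmp.C8k1P3Kh4b in this run
    # wrap the configured hex key as an NVMeTLSkey-1 interchange string (digest 0)
    format_interchange_psk 00112233445566778899aabbccddeeff 0 > "$path"
    chmod 0600 "$path"                             # keyring_file rejects keys readable by group/other
    scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$path"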
00:33:15.127 [2024-04-26 15:51:45.172662] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid92543 ] 00:33:15.127 [2024-04-26 15:51:45.308955] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.387 [2024-04-26 15:51:45.439062] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:15.952 15:51:46 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:15.952 15:51:46 -- common/autotest_common.sh@850 -- # return 0 00:33:15.952 15:51:46 -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:15.952 15:51:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:16.211 15:51:46 -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SGpQ3c3Jyj 00:33:16.211 15:51:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SGpQ3c3Jyj 00:33:16.470 15:51:46 -- keyring/file.sh@51 -- # get_key key0 00:33:16.470 15:51:46 -- keyring/file.sh@51 -- # jq -r .path 00:33:16.470 15:51:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.470 15:51:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.470 15:51:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.727 15:51:46 -- keyring/file.sh@51 -- # [[ /tmp/tmp.C8k1P3Kh4b == \/\t\m\p\/\t\m\p\.\C\8\k\1\P\3\K\h\4\b ]] 00:33:16.727 15:51:46 -- keyring/file.sh@52 -- # jq -r .path 00:33:16.727 15:51:46 -- keyring/file.sh@52 -- # get_key key1 00:33:16.727 15:51:46 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.727 15:51:46 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:16.727 15:51:46 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:16.986 15:51:47 -- keyring/file.sh@52 -- # [[ /tmp/tmp.SGpQ3c3Jyj == \/\t\m\p\/\t\m\p\.\S\G\p\Q\3\c\3\J\y\j ]] 00:33:16.986 15:51:47 -- keyring/file.sh@53 -- # get_refcnt key0 00:33:16.986 15:51:47 -- keyring/common.sh@12 -- # get_key key0 00:33:16.986 15:51:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:16.986 15:51:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:16.986 15:51:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:16.986 15:51:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.244 15:51:47 -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:33:17.244 15:51:47 -- keyring/file.sh@54 -- # get_refcnt key1 00:33:17.244 15:51:47 -- keyring/common.sh@12 -- # get_key key1 00:33:17.244 15:51:47 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:17.244 15:51:47 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:17.244 15:51:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:17.244 15:51:47 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:17.809 15:51:47 -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:33:17.809 15:51:47 -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk 
key0 00:33:17.809 15:51:47 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:17.809 [2024-04-26 15:51:48.030646] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:17.809 nvme0n1 00:33:18.067 15:51:48 -- keyring/file.sh@59 -- # get_refcnt key0 00:33:18.067 15:51:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.067 15:51:48 -- keyring/common.sh@12 -- # get_key key0 00:33:18.067 15:51:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.067 15:51:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:18.067 15:51:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.325 15:51:48 -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:33:18.325 15:51:48 -- keyring/file.sh@60 -- # get_refcnt key1 00:33:18.325 15:51:48 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:18.325 15:51:48 -- keyring/common.sh@12 -- # get_key key1 00:33:18.325 15:51:48 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:18.325 15:51:48 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:18.325 15:51:48 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:18.583 15:51:48 -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:33:18.583 15:51:48 -- keyring/file.sh@62 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:18.583 Running I/O for 1 seconds... 00:33:19.960 00:33:19.960 Latency(us) 00:33:19.960 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:19.960 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:33:19.960 nvme0n1 : 1.01 11591.30 45.28 0.00 0.00 11000.35 5987.61 19541.64 00:33:19.960 =================================================================================================================== 00:33:19.960 Total : 11591.30 45.28 0.00 0.00 11000.35 5987.61 19541.64 00:33:19.960 0 00:33:19.960 15:51:49 -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:19.960 15:51:49 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:19.960 15:51:50 -- keyring/file.sh@65 -- # get_refcnt key0 00:33:19.960 15:51:50 -- keyring/common.sh@12 -- # get_key key0 00:33:19.960 15:51:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:19.960 15:51:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:19.960 15:51:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:19.960 15:51:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:20.218 15:51:50 -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:33:20.218 15:51:50 -- keyring/file.sh@66 -- # get_refcnt key1 00:33:20.218 15:51:50 -- keyring/common.sh@12 -- # get_key key1 00:33:20.218 15:51:50 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:20.218 15:51:50 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:20.218 15:51:50 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:20.218 15:51:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:20.784 
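The positive-path check above is plain bdevperf I/O over the freshly attached TLS-PSK connection. bdevperf was launched with '-q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z', i.e. in wait mode, so after the controller attach the test kicks the workload off over the same socket:

    scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp \
        -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 \
        -q nqn.2016-06.io.spdk:host0 --psk key0
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests

The one-second run at roughly 11.6k IOPS with no failures or timeouts is the evidence that key0 really carried the connection.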
15:51:50 -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:33:20.784 15:51:50 -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:20.784 15:51:50 -- common/autotest_common.sh@638 -- # local es=0 00:33:20.784 15:51:50 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:20.784 15:51:50 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:20.784 15:51:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:20.784 15:51:50 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:20.784 15:51:50 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:20.784 15:51:50 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:20.784 15:51:50 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:33:20.784 [2024-04-26 15:51:51.060588] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:33:20.784 [2024-04-26 15:51:51.061169] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8710 (107): Transport endpoint is not connected 00:33:20.784 [2024-04-26 15:51:51.062157] nvme_tcp.c:2173:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x7d8710 (9): Bad file descriptor 00:33:20.784 [2024-04-26 15:51:51.063157] nvme_ctrlr.c:4040:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:33:20.784 [2024-04-26 15:51:51.063177] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:33:20.784 [2024-04-26 15:51:51.063187] nvme_ctrlr.c:1041:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 
00:33:20.812 2024/04/26 15:51:51 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key1 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-32602 Msg=Invalid parameters 00:33:20.812 request: 00:33:20.812 { 00:33:20.812 "method": "bdev_nvme_attach_controller", 00:33:20.812 "params": { 00:33:20.812 "name": "nvme0", 00:33:20.812 "trtype": "tcp", 00:33:20.812 "traddr": "127.0.0.1", 00:33:20.812 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:20.812 "adrfam": "ipv4", 00:33:20.812 "trsvcid": "4420", 00:33:20.812 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:20.812 "psk": "key1" 00:33:20.812 } 00:33:20.812 } 00:33:20.812 Got JSON-RPC error response 00:33:20.812 GoRPCClient: error on JSON-RPC call 00:33:21.070 15:51:51 -- common/autotest_common.sh@641 -- # es=1 00:33:21.070 15:51:51 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:21.070 15:51:51 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:21.070 15:51:51 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:21.070 15:51:51 -- keyring/file.sh@71 -- # get_refcnt key0 00:33:21.070 15:51:51 -- keyring/common.sh@12 -- # get_key key0 00:33:21.070 15:51:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.070 15:51:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.070 15:51:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.070 15:51:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:21.070 15:51:51 -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:33:21.070 15:51:51 -- keyring/file.sh@72 -- # get_refcnt key1 00:33:21.070 15:51:51 -- keyring/common.sh@12 -- # get_key key1 00:33:21.070 15:51:51 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:21.070 15:51:51 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:21.070 15:51:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:21.070 15:51:51 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:21.636 15:51:51 -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:33:21.636 15:51:51 -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:33:21.636 15:51:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:21.636 15:51:51 -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:33:21.636 15:51:51 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:33:22.202 15:51:52 -- keyring/file.sh@77 -- # jq length 00:33:22.202 15:51:52 -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:33:22.202 15:51:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:22.202 15:51:52 -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:33:22.202 15:51:52 -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.C8k1P3Kh4b 00:33:22.202 15:51:52 -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:22.202 15:51:52 -- common/autotest_common.sh@638 -- # local es=0 00:33:22.202 15:51:52 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:22.202 15:51:52 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:22.202 
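The wrong-key attach above, like the permission and missing-file cases that follow, is wrapped in the NOT helper from autotest_common.sh, which inverts the exit status so an expected RPC failure keeps the test green. A simplified reconstruction of the idea (not the exact helper):

    NOT() {
        # succeed only when the wrapped command fails
        ! "$@"
    }
    NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1

Here the attach is required to fail because key1 does not match the PSK configured on the target side; the 'Transport endpoint is not connected' errors and the JSON-RPC 'Invalid parameters' response above confirm that it did.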
15:51:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:22.202 15:51:52 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:22.202 15:51:52 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:22.202 15:51:52 -- common/autotest_common.sh@641 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:22.202 15:51:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:22.484 [2024-04-26 15:51:52.741682] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.C8k1P3Kh4b': 0100660 00:33:22.484 [2024-04-26 15:51:52.741722] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:33:22.484 2024/04/26 15:51:52 error on JSON-RPC call, method: keyring_file_add_key, params: map[name:key0 path:/tmp/tmp.C8k1P3Kh4b], err: error received for keyring_file_add_key method, err: Code=-1 Msg=Operation not permitted 00:33:22.484 request: 00:33:22.484 { 00:33:22.484 "method": "keyring_file_add_key", 00:33:22.484 "params": { 00:33:22.484 "name": "key0", 00:33:22.484 "path": "/tmp/tmp.C8k1P3Kh4b" 00:33:22.484 } 00:33:22.484 } 00:33:22.484 Got JSON-RPC error response 00:33:22.484 GoRPCClient: error on JSON-RPC call 00:33:22.484 15:51:52 -- common/autotest_common.sh@641 -- # es=1 00:33:22.484 15:51:52 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:22.484 15:51:52 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:22.484 15:51:52 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:22.484 15:51:52 -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.C8k1P3Kh4b 00:33:22.484 15:51:52 -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:22.485 15:51:52 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C8k1P3Kh4b 00:33:23.060 15:51:53 -- keyring/file.sh@86 -- # rm -f /tmp/tmp.C8k1P3Kh4b 00:33:23.060 15:51:53 -- keyring/file.sh@88 -- # get_refcnt key0 00:33:23.060 15:51:53 -- keyring/common.sh@12 -- # get_key key0 00:33:23.060 15:51:53 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:23.060 15:51:53 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:23.060 15:51:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:23.060 15:51:53 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:23.060 15:51:53 -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:33:23.060 15:51:53 -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.060 15:51:53 -- common/autotest_common.sh@638 -- # local es=0 00:33:23.060 15:51:53 -- common/autotest_common.sh@640 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.060 15:51:53 -- common/autotest_common.sh@626 -- # local arg=bperf_cmd 00:33:23.060 15:51:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:23.060 15:51:53 -- common/autotest_common.sh@630 -- # type -t bperf_cmd 00:33:23.060 15:51:53 -- common/autotest_common.sh@630 -- # case "$(type -t "$arg")" in 00:33:23.060 15:51:53 -- common/autotest_common.sh@641 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 
127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.060 15:51:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.318 [2024-04-26 15:51:53.529863] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.C8k1P3Kh4b': No such file or directory 00:33:23.318 [2024-04-26 15:51:53.529924] nvme_tcp.c:2570:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:33:23.318 [2024-04-26 15:51:53.529950] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:33:23.318 [2024-04-26 15:51:53.529959] nvme.c: 821:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:23.318 [2024-04-26 15:51:53.529968] bdev_nvme.c:6204:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:33:23.318 2024/04/26 15:51:53 error on JSON-RPC call, method: bdev_nvme_attach_controller, params: map[adrfam:ipv4 hostnqn:nqn.2016-06.io.spdk:host0 name:nvme0 psk:key0 subnqn:nqn.2016-06.io.spdk:cnode0 traddr:127.0.0.1 trsvcid:4420 trtype:tcp], err: error received for bdev_nvme_attach_controller method, err: Code=-19 Msg=No such device 00:33:23.318 request: 00:33:23.318 { 00:33:23.318 "method": "bdev_nvme_attach_controller", 00:33:23.318 "params": { 00:33:23.318 "name": "nvme0", 00:33:23.318 "trtype": "tcp", 00:33:23.318 "traddr": "127.0.0.1", 00:33:23.318 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.318 "adrfam": "ipv4", 00:33:23.318 "trsvcid": "4420", 00:33:23.318 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.318 "psk": "key0" 00:33:23.318 } 00:33:23.318 } 00:33:23.318 Got JSON-RPC error response 00:33:23.318 GoRPCClient: error on JSON-RPC call 00:33:23.318 15:51:53 -- common/autotest_common.sh@641 -- # es=1 00:33:23.318 15:51:53 -- common/autotest_common.sh@649 -- # (( es > 128 )) 00:33:23.318 15:51:53 -- common/autotest_common.sh@660 -- # [[ -n '' ]] 00:33:23.318 15:51:53 -- common/autotest_common.sh@665 -- # (( !es == 0 )) 00:33:23.318 15:51:53 -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:33:23.318 15:51:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:23.884 15:51:53 -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:33:23.884 15:51:53 -- keyring/common.sh@15 -- # local name key digest path 00:33:23.884 15:51:53 -- keyring/common.sh@17 -- # name=key0 00:33:23.884 15:51:53 -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:33:23.884 15:51:53 -- keyring/common.sh@17 -- # digest=0 00:33:23.884 15:51:53 -- keyring/common.sh@18 -- # mktemp 00:33:23.884 15:51:53 -- keyring/common.sh@18 -- # path=/tmp/tmp.CWMCBi0wA0 00:33:23.884 15:51:53 -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:33:23.884 15:51:53 -- nvmf/common.sh@704 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:33:23.884 15:51:53 -- nvmf/common.sh@691 -- # local prefix key digest 00:33:23.884 15:51:53 -- nvmf/common.sh@693 -- # prefix=NVMeTLSkey-1 00:33:23.884 15:51:53 -- nvmf/common.sh@693 -- # key=00112233445566778899aabbccddeeff 00:33:23.884 15:51:53 -- nvmf/common.sh@693 -- # digest=0 00:33:23.884 15:51:53 -- nvmf/common.sh@694 -- # python - 00:33:23.884 
15:51:53 -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CWMCBi0wA0 00:33:23.884 15:51:53 -- keyring/common.sh@23 -- # echo /tmp/tmp.CWMCBi0wA0 00:33:23.884 15:51:53 -- keyring/file.sh@95 -- # key0path=/tmp/tmp.CWMCBi0wA0 00:33:23.884 15:51:53 -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CWMCBi0wA0 00:33:23.884 15:51:53 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CWMCBi0wA0 00:33:23.884 15:51:54 -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:23.884 15:51:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:24.450 nvme0n1 00:33:24.450 15:51:54 -- keyring/file.sh@99 -- # get_refcnt key0 00:33:24.450 15:51:54 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:24.450 15:51:54 -- keyring/common.sh@12 -- # get_key key0 00:33:24.450 15:51:54 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:24.450 15:51:54 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:24.450 15:51:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.708 15:51:54 -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:33:24.708 15:51:54 -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:33:24.708 15:51:54 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:33:24.967 15:51:55 -- keyring/file.sh@101 -- # get_key key0 00:33:24.967 15:51:55 -- keyring/file.sh@101 -- # jq -r .removed 00:33:24.967 15:51:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:24.968 15:51:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:24.968 15:51:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:25.226 15:51:55 -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:33:25.226 15:51:55 -- keyring/file.sh@102 -- # get_refcnt key0 00:33:25.226 15:51:55 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:25.226 15:51:55 -- keyring/common.sh@12 -- # get_key key0 00:33:25.226 15:51:55 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:25.226 15:51:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:25.226 15:51:55 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:25.485 15:51:55 -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:33:25.485 15:51:55 -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:33:25.485 15:51:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:33:25.750 15:51:55 -- keyring/file.sh@104 -- # jq length 00:33:25.750 15:51:55 -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:33:25.750 15:51:55 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:26.008 15:51:56 -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:33:26.008 15:51:56 -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CWMCBi0wA0 00:33:26.008 15:51:56 -- keyring/common.sh@8 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CWMCBi0wA0 00:33:26.266 15:51:56 -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SGpQ3c3Jyj 00:33:26.266 15:51:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SGpQ3c3Jyj 00:33:26.524 15:51:56 -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:26.524 15:51:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:33:26.784 nvme0n1 00:33:26.784 15:51:56 -- keyring/file.sh@112 -- # bperf_cmd save_config 00:33:26.784 15:51:56 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:33:27.043 15:51:57 -- keyring/file.sh@112 -- # config='{ 00:33:27.043 "subsystems": [ 00:33:27.043 { 00:33:27.043 "subsystem": "keyring", 00:33:27.043 "config": [ 00:33:27.043 { 00:33:27.043 "method": "keyring_file_add_key", 00:33:27.043 "params": { 00:33:27.043 "name": "key0", 00:33:27.043 "path": "/tmp/tmp.CWMCBi0wA0" 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "keyring_file_add_key", 00:33:27.043 "params": { 00:33:27.043 "name": "key1", 00:33:27.043 "path": "/tmp/tmp.SGpQ3c3Jyj" 00:33:27.043 } 00:33:27.043 } 00:33:27.043 ] 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "subsystem": "iobuf", 00:33:27.043 "config": [ 00:33:27.043 { 00:33:27.043 "method": "iobuf_set_options", 00:33:27.043 "params": { 00:33:27.043 "large_bufsize": 135168, 00:33:27.043 "large_pool_count": 1024, 00:33:27.043 "small_bufsize": 8192, 00:33:27.043 "small_pool_count": 8192 00:33:27.043 } 00:33:27.043 } 00:33:27.043 ] 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "subsystem": "sock", 00:33:27.043 "config": [ 00:33:27.043 { 00:33:27.043 "method": "sock_impl_set_options", 00:33:27.043 "params": { 00:33:27.043 "enable_ktls": false, 00:33:27.043 "enable_placement_id": 0, 00:33:27.043 "enable_quickack": false, 00:33:27.043 "enable_recv_pipe": true, 00:33:27.043 "enable_zerocopy_send_client": false, 00:33:27.043 "enable_zerocopy_send_server": true, 00:33:27.043 "impl_name": "posix", 00:33:27.043 "recv_buf_size": 2097152, 00:33:27.043 "send_buf_size": 2097152, 00:33:27.043 "tls_version": 0, 00:33:27.043 "zerocopy_threshold": 0 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "sock_impl_set_options", 00:33:27.043 "params": { 00:33:27.043 "enable_ktls": false, 00:33:27.043 "enable_placement_id": 0, 00:33:27.043 "enable_quickack": false, 00:33:27.043 "enable_recv_pipe": true, 00:33:27.043 "enable_zerocopy_send_client": false, 00:33:27.043 "enable_zerocopy_send_server": true, 00:33:27.043 "impl_name": "ssl", 00:33:27.043 "recv_buf_size": 4096, 00:33:27.043 "send_buf_size": 4096, 00:33:27.043 "tls_version": 0, 00:33:27.043 "zerocopy_threshold": 0 00:33:27.043 } 00:33:27.043 } 00:33:27.043 ] 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "subsystem": "vmd", 00:33:27.043 "config": [] 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "subsystem": "accel", 00:33:27.043 "config": [ 00:33:27.043 { 00:33:27.043 "method": "accel_set_options", 00:33:27.043 "params": { 00:33:27.043 "buf_count": 2048, 00:33:27.043 "large_cache_size": 16, 00:33:27.043 
"sequence_count": 2048, 00:33:27.043 "small_cache_size": 128, 00:33:27.043 "task_count": 2048 00:33:27.043 } 00:33:27.043 } 00:33:27.043 ] 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "subsystem": "bdev", 00:33:27.043 "config": [ 00:33:27.043 { 00:33:27.043 "method": "bdev_set_options", 00:33:27.043 "params": { 00:33:27.043 "bdev_auto_examine": true, 00:33:27.043 "bdev_io_cache_size": 256, 00:33:27.043 "bdev_io_pool_size": 65535, 00:33:27.043 "iobuf_large_cache_size": 16, 00:33:27.043 "iobuf_small_cache_size": 128 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "bdev_raid_set_options", 00:33:27.043 "params": { 00:33:27.043 "process_window_size_kb": 1024 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "bdev_iscsi_set_options", 00:33:27.043 "params": { 00:33:27.043 "timeout_sec": 30 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "bdev_nvme_set_options", 00:33:27.043 "params": { 00:33:27.043 "action_on_timeout": "none", 00:33:27.043 "allow_accel_sequence": false, 00:33:27.043 "arbitration_burst": 0, 00:33:27.043 "bdev_retry_count": 3, 00:33:27.043 "ctrlr_loss_timeout_sec": 0, 00:33:27.043 "delay_cmd_submit": true, 00:33:27.043 "dhchap_dhgroups": [ 00:33:27.043 "null", 00:33:27.043 "ffdhe2048", 00:33:27.043 "ffdhe3072", 00:33:27.043 "ffdhe4096", 00:33:27.043 "ffdhe6144", 00:33:27.043 "ffdhe8192" 00:33:27.043 ], 00:33:27.043 "dhchap_digests": [ 00:33:27.043 "sha256", 00:33:27.043 "sha384", 00:33:27.043 "sha512" 00:33:27.043 ], 00:33:27.043 "disable_auto_failback": false, 00:33:27.043 "fast_io_fail_timeout_sec": 0, 00:33:27.043 "generate_uuids": false, 00:33:27.043 "high_priority_weight": 0, 00:33:27.043 "io_path_stat": false, 00:33:27.043 "io_queue_requests": 512, 00:33:27.043 "keep_alive_timeout_ms": 10000, 00:33:27.043 "low_priority_weight": 0, 00:33:27.043 "medium_priority_weight": 0, 00:33:27.043 "nvme_adminq_poll_period_us": 10000, 00:33:27.043 "nvme_error_stat": false, 00:33:27.043 "nvme_ioq_poll_period_us": 0, 00:33:27.043 "rdma_cm_event_timeout_ms": 0, 00:33:27.043 "rdma_max_cq_size": 0, 00:33:27.043 "rdma_srq_size": 0, 00:33:27.043 "reconnect_delay_sec": 0, 00:33:27.043 "timeout_admin_us": 0, 00:33:27.043 "timeout_us": 0, 00:33:27.043 "transport_ack_timeout": 0, 00:33:27.043 "transport_retry_count": 4, 00:33:27.043 "transport_tos": 0 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "bdev_nvme_attach_controller", 00:33:27.043 "params": { 00:33:27.043 "adrfam": "IPv4", 00:33:27.043 "ctrlr_loss_timeout_sec": 0, 00:33:27.043 "ddgst": false, 00:33:27.043 "fast_io_fail_timeout_sec": 0, 00:33:27.043 "hdgst": false, 00:33:27.043 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:27.043 "name": "nvme0", 00:33:27.043 "prchk_guard": false, 00:33:27.043 "prchk_reftag": false, 00:33:27.043 "psk": "key0", 00:33:27.043 "reconnect_delay_sec": 0, 00:33:27.043 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.043 "traddr": "127.0.0.1", 00:33:27.043 "trsvcid": "4420", 00:33:27.043 "trtype": "TCP" 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "bdev_nvme_set_hotplug", 00:33:27.043 "params": { 00:33:27.043 "enable": false, 00:33:27.043 "period_us": 100000 00:33:27.043 } 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "method": "bdev_wait_for_examine" 00:33:27.043 } 00:33:27.043 ] 00:33:27.043 }, 00:33:27.043 { 00:33:27.043 "subsystem": "nbd", 00:33:27.043 "config": [] 00:33:27.043 } 00:33:27.043 ] 00:33:27.043 }' 00:33:27.043 15:51:57 -- keyring/file.sh@114 -- # killprocess 92543 00:33:27.043 15:51:57 -- 
common/autotest_common.sh@936 -- # '[' -z 92543 ']' 00:33:27.043 15:51:57 -- common/autotest_common.sh@940 -- # kill -0 92543 00:33:27.043 15:51:57 -- common/autotest_common.sh@941 -- # uname 00:33:27.043 15:51:57 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:27.043 15:51:57 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92543 00:33:27.043 killing process with pid 92543 00:33:27.043 Received shutdown signal, test time was about 1.000000 seconds 00:33:27.043 00:33:27.044 Latency(us) 00:33:27.044 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:27.044 =================================================================================================================== 00:33:27.044 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:27.044 15:51:57 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:27.044 15:51:57 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:27.044 15:51:57 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92543' 00:33:27.044 15:51:57 -- common/autotest_common.sh@955 -- # kill 92543 00:33:27.044 15:51:57 -- common/autotest_common.sh@960 -- # wait 92543 00:33:27.611 15:51:57 -- keyring/file.sh@117 -- # bperfpid=93025 00:33:27.611 15:51:57 -- keyring/file.sh@119 -- # waitforlisten 93025 /var/tmp/bperf.sock 00:33:27.611 15:51:57 -- common/autotest_common.sh@817 -- # '[' -z 93025 ']' 00:33:27.611 15:51:57 -- keyring/file.sh@115 -- # echo '{ 00:33:27.611 "subsystems": [ 00:33:27.611 { 00:33:27.611 "subsystem": "keyring", 00:33:27.611 "config": [ 00:33:27.611 { 00:33:27.611 "method": "keyring_file_add_key", 00:33:27.611 "params": { 00:33:27.611 "name": "key0", 00:33:27.611 "path": "/tmp/tmp.CWMCBi0wA0" 00:33:27.611 } 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "method": "keyring_file_add_key", 00:33:27.611 "params": { 00:33:27.611 "name": "key1", 00:33:27.611 "path": "/tmp/tmp.SGpQ3c3Jyj" 00:33:27.611 } 00:33:27.611 } 00:33:27.611 ] 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "subsystem": "iobuf", 00:33:27.611 "config": [ 00:33:27.611 { 00:33:27.611 "method": "iobuf_set_options", 00:33:27.611 "params": { 00:33:27.611 "large_bufsize": 135168, 00:33:27.611 "large_pool_count": 1024, 00:33:27.611 "small_bufsize": 8192, 00:33:27.611 "small_pool_count": 8192 00:33:27.611 } 00:33:27.611 } 00:33:27.611 ] 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "subsystem": "sock", 00:33:27.611 "config": [ 00:33:27.611 { 00:33:27.611 "method": "sock_impl_set_options", 00:33:27.611 "params": { 00:33:27.611 "enable_ktls": false, 00:33:27.611 "enable_placement_id": 0, 00:33:27.611 "enable_quickack": false, 00:33:27.611 "enable_recv_pipe": true, 00:33:27.611 "enable_zerocopy_send_client": false, 00:33:27.611 "enable_zerocopy_send_server": true, 00:33:27.611 "impl_name": "posix", 00:33:27.611 "recv_buf_size": 2097152, 00:33:27.611 "send_buf_size": 2097152, 00:33:27.611 "tls_version": 0, 00:33:27.611 "zerocopy_threshold": 0 00:33:27.611 } 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "method": "sock_impl_set_options", 00:33:27.611 "params": { 00:33:27.611 "enable_ktls": false, 00:33:27.611 "enable_placement_id": 0, 00:33:27.611 "enable_quickack": false, 00:33:27.611 "enable_recv_pipe": true, 00:33:27.611 "enable_zerocopy_send_client": false, 00:33:27.611 "enable_zerocopy_send_server": true, 00:33:27.611 "impl_name": "ssl", 00:33:27.611 "recv_buf_size": 4096, 00:33:27.611 "send_buf_size": 4096, 00:33:27.611 "tls_version": 0, 00:33:27.611 "zerocopy_threshold": 0 00:33:27.611 } 00:33:27.611 } 
00:33:27.611 ] 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "subsystem": "vmd", 00:33:27.611 "config": [] 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "subsystem": "accel", 00:33:27.611 "config": [ 00:33:27.611 { 00:33:27.611 "method": "accel_set_options", 00:33:27.611 "params": { 00:33:27.611 "buf_count": 2048, 00:33:27.611 "large_cache_size": 16, 00:33:27.611 "sequence_count": 2048, 00:33:27.611 "small_cache_size": 128, 00:33:27.611 "task_count": 2048 00:33:27.611 } 00:33:27.611 } 00:33:27.611 ] 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "subsystem": "bdev", 00:33:27.611 "config": [ 00:33:27.611 { 00:33:27.611 "method": "bdev_set_options", 00:33:27.611 "params": { 00:33:27.611 "bdev_auto_examine": true, 00:33:27.611 "bdev_io_cache_size": 256, 00:33:27.611 "bdev_io_pool_size": 65535, 00:33:27.611 "iobuf_large_cache_size": 16, 00:33:27.611 "iobuf_small_cache_size": 128 00:33:27.611 } 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "method": "bdev_raid_set_options", 00:33:27.611 "params": { 00:33:27.611 "process_window_size_kb": 1024 00:33:27.611 } 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "method": "bdev_iscsi_set_options", 00:33:27.611 "params": { 00:33:27.611 "timeout_sec": 30 00:33:27.611 } 00:33:27.611 }, 00:33:27.611 { 00:33:27.611 "method": "bdev_nvme_set_options", 00:33:27.611 "params": { 00:33:27.611 "action_on_timeout": "none", 00:33:27.611 "allow_accel_sequence": false, 00:33:27.611 "arbitration_burst": 0, 00:33:27.611 "bdev_retry_count": 3, 00:33:27.611 "ctrlr_loss_timeout_sec": 0, 00:33:27.611 "delay_cmd_submit": true, 00:33:27.611 "dhchap_dhgroups": [ 00:33:27.611 "null", 00:33:27.611 "ffdhe2048", 00:33:27.611 "ffdhe3072", 00:33:27.611 "ffdhe4096", 00:33:27.611 "ffdhe6144", 00:33:27.611 "ffdhe8192" 00:33:27.611 ], 00:33:27.611 "dhchap_digests": [ 00:33:27.611 "sha256", 00:33:27.611 "sha384", 00:33:27.611 "sha512" 00:33:27.611 ], 00:33:27.611 "disable_auto_failback": false, 00:33:27.611 "fast_io_fail_timeout_sec": 0, 00:33:27.611 "generate_uuids": false, 00:33:27.612 "high_priority_weight": 0, 00:33:27.612 "io_path_stat": false, 00:33:27.612 "io_queue_requests": 512, 00:33:27.612 "keep_alive_timeout_ms": 10000, 00:33:27.612 "low_priority_weight": 0, 00:33:27.612 "medium_priority_weight": 0, 00:33:27.612 "nvme_adminq_poll_period_us": 10000, 00:33:27.612 "nvme_error_stat": false, 00:33:27.612 "nvme_ioq_poll_period_us": 0, 00:33:27.612 "rdma_cm_event_timeout_ms": 0, 00:33:27.612 "rdma_max_cq_size": 0, 00:33:27.612 "rdma_srq_size": 0, 00:33:27.612 "reconnect_delay_sec": 0, 00:33:27.612 "timeout_admin_us": 0, 00:33:27.612 "timeout_us": 0, 00:33:27.612 "transport_ack_timeout": 0, 00:33:27.612 "transport_retry_count": 4, 00:33:27.612 "transport_tos": 0 00:33:27.612 } 00:33:27.612 }, 00:33:27.612 { 00:33:27.612 "method": "bdev_nvme_attach_controller", 00:33:27.612 "params": { 00:33:27.612 "adrfam": "IPv4", 00:33:27.612 "ctrlr_loss_timeout_sec": 0, 00:33:27.612 "ddgst": false, 00:33:27.612 "fast_io_fail_timeout_sec": 0, 00:33:27.612 "hdgst": false, 00:33:27.612 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:27.612 "name": "nvme0", 00:33:27.612 "prchk_guard": false, 00:33:27.612 "prchk_reftag": false, 00:33:27.612 "psk": "key0", 00:33:27.612 "reconnect_delay_sec": 0, 00:33:27.612 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:27.612 "traddr": "127.0.0.1", 00:33:27.612 "trsvcid": "4420", 00:33:27.612 "trtype": "TCP" 00:33:27.612 } 00:33:27.612 }, 00:33:27.612 { 00:33:27.612 "method": "bdev_nvme_set_hotplug", 00:33:27.612 "params": { 00:33:27.612 "enable": false, 00:33:27.612 "period_us": 
100000 00:33:27.612 } 00:33:27.612 }, 00:33:27.612 { 00:33:27.612 "method": "bdev_wait_for_examine" 00:33:27.612 } 00:33:27.612 ] 00:33:27.612 }, 00:33:27.612 { 00:33:27.612 "subsystem": "nbd", 00:33:27.612 "config": [] 00:33:27.612 } 00:33:27.612 ] 00:33:27.612 }' 00:33:27.612 15:51:57 -- keyring/file.sh@115 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:33:27.612 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:27.612 15:51:57 -- common/autotest_common.sh@821 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:27.612 15:51:57 -- common/autotest_common.sh@822 -- # local max_retries=100 00:33:27.612 15:51:57 -- common/autotest_common.sh@824 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:27.612 15:51:57 -- common/autotest_common.sh@826 -- # xtrace_disable 00:33:27.612 15:51:57 -- common/autotest_common.sh@10 -- # set +x 00:33:27.612 [2024-04-26 15:51:57.656240] Starting SPDK v24.05-pre git sha1 2971e8ff3 / DPDK 23.11.0 initialization... 00:33:27.612 [2024-04-26 15:51:57.656545] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid93025 ] 00:33:27.612 [2024-04-26 15:51:57.794015] app.c: 828:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:27.870 [2024-04-26 15:51:57.913503] reactor.c: 937:reactor_run: *NOTICE*: Reactor started on core 1 00:33:27.870 [2024-04-26 15:51:58.093635] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:33:28.437 15:51:58 -- common/autotest_common.sh@846 -- # (( i == 0 )) 00:33:28.437 15:51:58 -- common/autotest_common.sh@850 -- # return 0 00:33:28.437 15:51:58 -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:33:28.437 15:51:58 -- keyring/file.sh@120 -- # jq length 00:33:28.437 15:51:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.694 15:51:58 -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:33:28.694 15:51:58 -- keyring/file.sh@121 -- # get_refcnt key0 00:33:28.694 15:51:58 -- keyring/common.sh@12 -- # get_key key0 00:33:28.695 15:51:58 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.695 15:51:58 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.695 15:51:58 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:33:28.695 15:51:58 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:28.991 15:51:59 -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:33:28.991 15:51:59 -- keyring/file.sh@122 -- # get_refcnt key1 00:33:28.991 15:51:59 -- keyring/common.sh@12 -- # get_key key1 00:33:28.991 15:51:59 -- keyring/common.sh@12 -- # jq -r .refcnt 00:33:28.991 15:51:59 -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:33:28.991 15:51:59 -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:33:28.991 15:51:59 -- keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:33:29.566 15:51:59 -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:33:29.566 15:51:59 -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:33:29.566 15:51:59 -- keyring/file.sh@123 -- # jq -r '.[].name' 00:33:29.567 15:51:59 -- 
keyring/common.sh@8 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:33:29.824 15:51:59 -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:33:29.824 15:51:59 -- keyring/file.sh@1 -- # cleanup 00:33:29.824 15:51:59 -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CWMCBi0wA0 /tmp/tmp.SGpQ3c3Jyj 00:33:29.824 15:51:59 -- keyring/file.sh@20 -- # killprocess 93025 00:33:29.824 15:51:59 -- common/autotest_common.sh@936 -- # '[' -z 93025 ']' 00:33:29.824 15:51:59 -- common/autotest_common.sh@940 -- # kill -0 93025 00:33:29.824 15:51:59 -- common/autotest_common.sh@941 -- # uname 00:33:29.824 15:51:59 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:29.824 15:51:59 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 93025 00:33:29.824 killing process with pid 93025 00:33:29.824 Received shutdown signal, test time was about 1.000000 seconds 00:33:29.824 00:33:29.824 Latency(us) 00:33:29.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:29.824 =================================================================================================================== 00:33:29.824 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:33:29.824 15:51:59 -- common/autotest_common.sh@942 -- # process_name=reactor_1 00:33:29.824 15:51:59 -- common/autotest_common.sh@946 -- # '[' reactor_1 = sudo ']' 00:33:29.824 15:51:59 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 93025' 00:33:29.824 15:51:59 -- common/autotest_common.sh@955 -- # kill 93025 00:33:29.824 15:51:59 -- common/autotest_common.sh@960 -- # wait 93025 00:33:30.083 15:52:00 -- keyring/file.sh@21 -- # killprocess 92509 00:33:30.083 15:52:00 -- common/autotest_common.sh@936 -- # '[' -z 92509 ']' 00:33:30.083 15:52:00 -- common/autotest_common.sh@940 -- # kill -0 92509 00:33:30.083 15:52:00 -- common/autotest_common.sh@941 -- # uname 00:33:30.083 15:52:00 -- common/autotest_common.sh@941 -- # '[' Linux = Linux ']' 00:33:30.083 15:52:00 -- common/autotest_common.sh@942 -- # ps --no-headers -o comm= 92509 00:33:30.083 15:52:00 -- common/autotest_common.sh@942 -- # process_name=reactor_0 00:33:30.083 15:52:00 -- common/autotest_common.sh@946 -- # '[' reactor_0 = sudo ']' 00:33:30.083 15:52:00 -- common/autotest_common.sh@954 -- # echo 'killing process with pid 92509' 00:33:30.083 killing process with pid 92509 00:33:30.083 15:52:00 -- common/autotest_common.sh@955 -- # kill 92509 00:33:30.083 [2024-04-26 15:52:00.168671] app.c: 937:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:33:30.083 15:52:00 -- common/autotest_common.sh@960 -- # wait 92509 00:33:30.341 00:33:30.341 real 0m16.930s 00:33:30.341 user 0m41.953s 00:33:30.341 sys 0m3.564s 00:33:30.341 15:52:00 -- common/autotest_common.sh@1112 -- # xtrace_disable 00:33:30.341 ************************************ 00:33:30.341 END TEST keyring_file 00:33:30.341 ************************************ 00:33:30.341 15:52:00 -- common/autotest_common.sh@10 -- # set +x 00:33:30.598 15:52:00 -- spdk/autotest.sh@294 -- # [[ n == y ]] 00:33:30.598 15:52:00 -- spdk/autotest.sh@306 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@310 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@314 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@328 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@333 -- # 
'[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@337 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@341 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@345 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@354 -- # '[' 0 -eq 1 ']' 00:33:30.599 15:52:00 -- spdk/autotest.sh@361 -- # [[ 0 -eq 1 ]] 00:33:30.599 15:52:00 -- spdk/autotest.sh@365 -- # [[ 0 -eq 1 ]] 00:33:30.599 15:52:00 -- spdk/autotest.sh@369 -- # [[ 0 -eq 1 ]] 00:33:30.599 15:52:00 -- spdk/autotest.sh@373 -- # [[ 0 -eq 1 ]] 00:33:30.599 15:52:00 -- spdk/autotest.sh@378 -- # trap - SIGINT SIGTERM EXIT 00:33:30.599 15:52:00 -- spdk/autotest.sh@380 -- # timing_enter post_cleanup 00:33:30.599 15:52:00 -- common/autotest_common.sh@710 -- # xtrace_disable 00:33:30.599 15:52:00 -- common/autotest_common.sh@10 -- # set +x 00:33:30.599 15:52:00 -- spdk/autotest.sh@381 -- # autotest_cleanup 00:33:30.599 15:52:00 -- common/autotest_common.sh@1378 -- # local autotest_es=0 00:33:30.599 15:52:00 -- common/autotest_common.sh@1379 -- # xtrace_disable 00:33:30.599 15:52:00 -- common/autotest_common.sh@10 -- # set +x 00:33:31.978 INFO: APP EXITING 00:33:31.978 INFO: killing all VMs 00:33:31.978 INFO: killing vhost app 00:33:31.978 INFO: EXIT DONE 00:33:32.968 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:32.968 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:33:32.968 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:33:33.532 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:33:33.532 Cleaning 00:33:33.532 Removing: /var/run/dpdk/spdk0/config 00:33:33.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:33:33.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:33:33.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:33:33.532 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:33:33.532 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:33:33.532 Removing: /var/run/dpdk/spdk0/hugepage_info 00:33:33.532 Removing: /var/run/dpdk/spdk1/config 00:33:33.532 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:33:33.532 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:33:33.532 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:33:33.532 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:33:33.532 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:33:33.532 Removing: /var/run/dpdk/spdk1/hugepage_info 00:33:33.532 Removing: /var/run/dpdk/spdk2/config 00:33:33.532 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:33:33.532 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:33:33.532 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:33:33.532 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:33:33.532 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:33:33.532 Removing: /var/run/dpdk/spdk2/hugepage_info 00:33:33.532 Removing: /var/run/dpdk/spdk3/config 00:33:33.532 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:33:33.532 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:33:33.532 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:33:33.532 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:33:33.532 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:33:33.532 Removing: /var/run/dpdk/spdk3/hugepage_info 00:33:33.532 Removing: /var/run/dpdk/spdk4/config 00:33:33.532 
Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:33:33.532 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:33:33.532 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:33:33.532 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:33:33.532 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:33:33.532 Removing: /var/run/dpdk/spdk4/hugepage_info 00:33:33.532 Removing: /dev/shm/nvmf_trace.0 00:33:33.532 Removing: /dev/shm/spdk_tgt_trace.pid60052 00:33:33.532 Removing: /var/run/dpdk/spdk0 00:33:33.532 Removing: /var/run/dpdk/spdk1 00:33:33.532 Removing: /var/run/dpdk/spdk2 00:33:33.532 Removing: /var/run/dpdk/spdk3 00:33:33.532 Removing: /var/run/dpdk/spdk4 00:33:33.532 Removing: /var/run/dpdk/spdk_pid59884 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60052 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60350 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60452 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60486 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60610 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60640 00:33:33.532 Removing: /var/run/dpdk/spdk_pid60768 00:33:33.532 Removing: /var/run/dpdk/spdk_pid61048 00:33:33.532 Removing: /var/run/dpdk/spdk_pid61229 00:33:33.532 Removing: /var/run/dpdk/spdk_pid61311 00:33:33.532 Removing: /var/run/dpdk/spdk_pid61408 00:33:33.532 Removing: /var/run/dpdk/spdk_pid61508 00:33:33.532 Removing: /var/run/dpdk/spdk_pid61555 00:33:33.790 Removing: /var/run/dpdk/spdk_pid61590 00:33:33.790 Removing: /var/run/dpdk/spdk_pid61658 00:33:33.790 Removing: /var/run/dpdk/spdk_pid61789 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62429 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62497 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62570 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62598 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62682 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62710 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62794 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62822 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62883 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62912 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62963 00:33:33.790 Removing: /var/run/dpdk/spdk_pid62993 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63155 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63195 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63274 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63352 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63386 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63464 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63497 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63541 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63586 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63619 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63663 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63701 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63742 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63785 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63821 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63865 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63904 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63942 00:33:33.790 Removing: /var/run/dpdk/spdk_pid63981 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64020 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64059 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64103 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64144 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64186 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64231 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64270 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64341 
00:33:33.791 Removing: /var/run/dpdk/spdk_pid64461 00:33:33.791 Removing: /var/run/dpdk/spdk_pid64900 00:33:33.791 Removing: /var/run/dpdk/spdk_pid68349 00:33:33.791 Removing: /var/run/dpdk/spdk_pid68697 00:33:33.791 Removing: /var/run/dpdk/spdk_pid69907 00:33:33.791 Removing: /var/run/dpdk/spdk_pid70288 00:33:33.791 Removing: /var/run/dpdk/spdk_pid70563 00:33:33.791 Removing: /var/run/dpdk/spdk_pid70603 00:33:33.791 Removing: /var/run/dpdk/spdk_pid71495 00:33:33.791 Removing: /var/run/dpdk/spdk_pid71545 00:33:33.791 Removing: /var/run/dpdk/spdk_pid71928 00:33:33.791 Removing: /var/run/dpdk/spdk_pid72457 00:33:33.791 Removing: /var/run/dpdk/spdk_pid72879 00:33:33.791 Removing: /var/run/dpdk/spdk_pid73848 00:33:33.791 Removing: /var/run/dpdk/spdk_pid74843 00:33:33.791 Removing: /var/run/dpdk/spdk_pid74967 00:33:33.791 Removing: /var/run/dpdk/spdk_pid75035 00:33:33.791 Removing: /var/run/dpdk/spdk_pid76520 00:33:33.791 Removing: /var/run/dpdk/spdk_pid76762 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77209 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77319 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77476 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77508 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77548 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77599 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77757 00:33:33.791 Removing: /var/run/dpdk/spdk_pid77910 00:33:33.791 Removing: /var/run/dpdk/spdk_pid78184 00:33:33.791 Removing: /var/run/dpdk/spdk_pid78301 00:33:33.791 Removing: /var/run/dpdk/spdk_pid78543 00:33:33.791 Removing: /var/run/dpdk/spdk_pid78669 00:33:33.791 Removing: /var/run/dpdk/spdk_pid78808 00:33:33.791 Removing: /var/run/dpdk/spdk_pid79159 00:33:33.791 Removing: /var/run/dpdk/spdk_pid79580 00:33:33.791 Removing: /var/run/dpdk/spdk_pid79886 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80402 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80404 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80750 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80768 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80789 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80814 00:33:33.791 Removing: /var/run/dpdk/spdk_pid80820 00:33:33.791 Removing: /var/run/dpdk/spdk_pid81132 00:33:33.791 Removing: /var/run/dpdk/spdk_pid81175 00:33:34.049 Removing: /var/run/dpdk/spdk_pid81513 00:33:34.049 Removing: /var/run/dpdk/spdk_pid81770 00:33:34.049 Removing: /var/run/dpdk/spdk_pid82266 00:33:34.049 Removing: /var/run/dpdk/spdk_pid82812 00:33:34.049 Removing: /var/run/dpdk/spdk_pid83413 00:33:34.049 Removing: /var/run/dpdk/spdk_pid83415 00:33:34.049 Removing: /var/run/dpdk/spdk_pid85405 00:33:34.049 Removing: /var/run/dpdk/spdk_pid85491 00:33:34.049 Removing: /var/run/dpdk/spdk_pid85589 00:33:34.049 Removing: /var/run/dpdk/spdk_pid85680 00:33:34.049 Removing: /var/run/dpdk/spdk_pid85847 00:33:34.049 Removing: /var/run/dpdk/spdk_pid85937 00:33:34.049 Removing: /var/run/dpdk/spdk_pid86033 00:33:34.049 Removing: /var/run/dpdk/spdk_pid86118 00:33:34.049 Removing: /var/run/dpdk/spdk_pid86473 00:33:34.049 Removing: /var/run/dpdk/spdk_pid87182 00:33:34.049 Removing: /var/run/dpdk/spdk_pid88554 00:33:34.049 Removing: /var/run/dpdk/spdk_pid88764 00:33:34.049 Removing: /var/run/dpdk/spdk_pid89057 00:33:34.049 Removing: /var/run/dpdk/spdk_pid89362 00:33:34.049 Removing: /var/run/dpdk/spdk_pid89916 00:33:34.049 Removing: /var/run/dpdk/spdk_pid89931 00:33:34.049 Removing: /var/run/dpdk/spdk_pid90299 00:33:34.049 Removing: /var/run/dpdk/spdk_pid90462 00:33:34.049 Removing: /var/run/dpdk/spdk_pid90628 00:33:34.049 Removing: 
/var/run/dpdk/spdk_pid90725 00:33:34.049 Removing: /var/run/dpdk/spdk_pid90881 00:33:34.049 Removing: /var/run/dpdk/spdk_pid90994 00:33:34.049 Removing: /var/run/dpdk/spdk_pid91682 00:33:34.049 Removing: /var/run/dpdk/spdk_pid91717 00:33:34.049 Removing: /var/run/dpdk/spdk_pid91747 00:33:34.049 Removing: /var/run/dpdk/spdk_pid92015 00:33:34.049 Removing: /var/run/dpdk/spdk_pid92047 00:33:34.049 Removing: /var/run/dpdk/spdk_pid92081 00:33:34.049 Removing: /var/run/dpdk/spdk_pid92509 00:33:34.049 Removing: /var/run/dpdk/spdk_pid92543 00:33:34.049 Removing: /var/run/dpdk/spdk_pid93025 00:33:34.049 Clean 00:33:34.049 15:52:04 -- common/autotest_common.sh@1437 -- # return 0 00:33:34.049 15:52:04 -- spdk/autotest.sh@382 -- # timing_exit post_cleanup 00:33:34.049 15:52:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:34.049 15:52:04 -- common/autotest_common.sh@10 -- # set +x 00:33:34.307 15:52:04 -- spdk/autotest.sh@384 -- # timing_exit autotest 00:33:34.307 15:52:04 -- common/autotest_common.sh@716 -- # xtrace_disable 00:33:34.307 15:52:04 -- common/autotest_common.sh@10 -- # set +x 00:33:34.307 15:52:04 -- spdk/autotest.sh@385 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:33:34.307 15:52:04 -- spdk/autotest.sh@387 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:33:34.307 15:52:04 -- spdk/autotest.sh@387 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:33:34.307 15:52:04 -- spdk/autotest.sh@389 -- # hash lcov 00:33:34.307 15:52:04 -- spdk/autotest.sh@389 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:33:34.307 15:52:04 -- spdk/autotest.sh@391 -- # hostname 00:33:34.307 15:52:04 -- spdk/autotest.sh@391 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /home/vagrant/spdk_repo/spdk -t fedora38-cloud-1705279005-2131 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:33:34.565 geninfo: WARNING: invalid characters removed from testname! 
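For reference, the coverage post-processing carried out by the next few log entries boils down to the short sequence sketched below. This is only a condensed illustration of the commands visible in this run, assuming the same lcov tool, test-name tag, and output directory; the RC_OPTS and OUT variables are shorthand introduced here (the actual calls spell out the full --rc option list, including the genhtml_* and geninfo_all_blocks settings, on every invocation), so treat it as a sketch rather than the scripts themselves.

# Shorthand for the lcov flags repeated on every call in the log (abbreviated here).
RC_OPTS="--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
OUT=/home/vagrant/spdk_repo/spdk/../output

# Capture the coverage gathered while the tests ran, tagged with the VM image name.
lcov $RC_OPTS -c -d /home/vagrant/spdk_repo/spdk \
    -t fedora38-cloud-1705279005-2131 -o $OUT/cov_test.info

# Merge the pre-test baseline with the test capture into one report.
lcov $RC_OPTS -a $OUT/cov_base.info -a $OUT/cov_test.info -o $OUT/cov_total.info

# Strip third-party and helper-tool sources from the combined report,
# matching the filter patterns the log applies one lcov call at a time.
for pat in '*/dpdk/*' '/usr/*' '*/examples/vmd/*' '*/app/spdk_lspci/*' '*/app/spdk_top/*'; do
    lcov $RC_OPTS -r $OUT/cov_total.info "$pat" -o $OUT/cov_total.info
done

# Drop the intermediate captures once cov_total.info has been produced.
rm -f $OUT/cov_base.info $OUT/cov_test.info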
00:34:06.680 15:52:31 -- spdk/autotest.sh@392 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:06.680 15:52:35 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:07.676 15:52:37 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:10.205 15:52:40 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:13.559 15:52:43 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:16.083 15:52:45 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:34:18.612 15:52:48 -- spdk/autotest.sh@398 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:34:18.612 15:52:48 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:34:18.612 15:52:48 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:34:18.612 15:52:48 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:18.612 15:52:48 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:18.612 15:52:48 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.612 15:52:48 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.612 15:52:48 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.612 15:52:48 -- paths/export.sh@5 -- $ export PATH 00:34:18.612 15:52:48 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:18.612 15:52:48 -- common/autobuild_common.sh@434 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:34:18.612 15:52:48 -- common/autobuild_common.sh@435 -- $ date +%s 00:34:18.612 15:52:48 -- common/autobuild_common.sh@435 -- $ mktemp -dt spdk_1714146768.XXXXXX 00:34:18.612 15:52:48 -- common/autobuild_common.sh@435 -- $ SPDK_WORKSPACE=/tmp/spdk_1714146768.M6U3gU 00:34:18.612 15:52:48 -- common/autobuild_common.sh@437 -- $ [[ -n '' ]] 00:34:18.612 15:52:48 -- common/autobuild_common.sh@441 -- $ '[' -n '' ']' 00:34:18.612 15:52:48 -- common/autobuild_common.sh@444 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:34:18.612 15:52:48 -- common/autobuild_common.sh@448 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:34:18.612 15:52:48 -- common/autobuild_common.sh@450 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:34:18.612 15:52:48 -- common/autobuild_common.sh@451 -- $ get_config_params 00:34:18.612 15:52:48 -- common/autotest_common.sh@385 -- $ xtrace_disable 00:34:18.612 15:52:48 -- common/autotest_common.sh@10 -- $ set +x 00:34:18.612 15:52:48 -- common/autobuild_common.sh@451 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-usdt --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-avahi --with-golang' 00:34:18.612 15:52:48 -- common/autobuild_common.sh@453 -- $ start_monitor_resources 00:34:18.612 15:52:48 -- pm/common@17 -- $ local monitor 00:34:18.612 15:52:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:18.612 15:52:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94682 00:34:18.612 15:52:48 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:18.612 15:52:48 -- pm/common@23 -- $ MONITOR_RESOURCES_PIDS["$monitor"]=94684 00:34:18.612 15:52:48 -- pm/common@21 -- $ date +%s 00:34:18.612 15:52:48 -- pm/common@26 -- $ sleep 1 00:34:18.612 15:52:48 -- pm/common@21 -- $ date +%s 00:34:18.612 15:52:48 -- pm/common@21 -- $ sudo -E 
/home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714146768 00:34:18.612 15:52:48 -- pm/common@21 -- $ sudo -E /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1714146768 00:34:18.612 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714146768_collect-vmstat.pm.log 00:34:18.612 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1714146768_collect-cpu-load.pm.log 00:34:19.546 15:52:49 -- common/autobuild_common.sh@454 -- $ trap stop_monitor_resources EXIT 00:34:19.546 15:52:49 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:34:19.546 15:52:49 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:34:19.546 15:52:49 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:34:19.546 15:52:49 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:34:19.546 15:52:49 -- spdk/autopackage.sh@19 -- $ timing_finish 00:34:19.546 15:52:49 -- common/autotest_common.sh@722 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:34:19.546 15:52:49 -- common/autotest_common.sh@723 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:34:19.546 15:52:49 -- common/autotest_common.sh@725 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:34:19.546 15:52:49 -- spdk/autopackage.sh@20 -- $ exit 0 00:34:19.546 15:52:49 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:34:19.546 15:52:49 -- pm/common@30 -- $ signal_monitor_resources TERM 00:34:19.546 15:52:49 -- pm/common@41 -- $ local monitor pid pids signal=TERM 00:34:19.546 15:52:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:19.546 15:52:49 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:34:19.546 15:52:49 -- pm/common@45 -- $ pid=94691 00:34:19.546 15:52:49 -- pm/common@52 -- $ sudo kill -TERM 94691 00:34:19.546 15:52:49 -- pm/common@43 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:34:19.546 15:52:49 -- pm/common@44 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:34:19.546 15:52:49 -- pm/common@45 -- $ pid=94692 00:34:19.546 15:52:49 -- pm/common@52 -- $ sudo kill -TERM 94692 00:34:19.546 + [[ -n 5163 ]] 00:34:19.546 + sudo kill 5163 00:34:19.555 [Pipeline] } 00:34:19.574 [Pipeline] // timeout 00:34:19.579 [Pipeline] } 00:34:19.597 [Pipeline] // stage 00:34:19.603 [Pipeline] } 00:34:19.622 [Pipeline] // catchError 00:34:19.631 [Pipeline] stage 00:34:19.633 [Pipeline] { (Stop VM) 00:34:19.648 [Pipeline] sh 00:34:19.924 + vagrant halt 00:34:24.107 ==> default: Halting domain... 00:34:29.432 [Pipeline] sh 00:34:29.709 + vagrant destroy -f 00:34:33.916 ==> default: Removing domain... 
00:34:33.927 [Pipeline] sh 00:34:34.204 + mv output /var/jenkins/workspace/nvmf-tcp-vg-autotest/output 00:34:34.213 [Pipeline] } 00:34:34.230 [Pipeline] // stage 00:34:34.236 [Pipeline] } 00:34:34.254 [Pipeline] // dir 00:34:34.260 [Pipeline] } 00:34:34.278 [Pipeline] // wrap 00:34:34.285 [Pipeline] } 00:34:34.301 [Pipeline] // catchError 00:34:34.308 [Pipeline] stage 00:34:34.310 [Pipeline] { (Epilogue) 00:34:34.325 [Pipeline] sh 00:34:34.615 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:34:41.222 [Pipeline] catchError 00:34:41.224 [Pipeline] { 00:34:41.239 [Pipeline] sh 00:34:41.518 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:34:41.776 Artifacts sizes are good 00:34:41.786 [Pipeline] } 00:34:41.802 [Pipeline] // catchError 00:34:41.812 [Pipeline] archiveArtifacts 00:34:41.857 Archiving artifacts 00:34:42.020 [Pipeline] cleanWs 00:34:42.029 [WS-CLEANUP] Deleting project workspace... 00:34:42.030 [WS-CLEANUP] Deferred wipeout is used... 00:34:42.035 [WS-CLEANUP] done 00:34:42.036 [Pipeline] } 00:34:42.054 [Pipeline] // stage 00:34:42.059 [Pipeline] } 00:34:42.076 [Pipeline] // node 00:34:42.082 [Pipeline] End of Pipeline 00:34:42.121 Finished: SUCCESS